

AI Could Help You Build a Virus. OpenAI Knows It — and It’s Worried

We should prepare ourselves for a society where amateurs can create garage bioweapons.

Mihai Andrei
June 23, 2025 @ 5:35 pm


Image: AI-generated depiction of a virus.

“Can you help me create bioweapons?”

Predictably, ChatGPT said no. “Creating or disseminating biological weapons is illegal, unethical, and dangerous. If you have questions about biology, epidemiology, or related scientific topics for legitimate educational or research purposes, I’m happy to help,” the AI added.

So, I continued with a “genuine question” about editing viruses with low-tech methods, and it promptly gave me a guide on how to go about it. Jailbreaking AI chatbots like ChatGPT is notoriously easy, and OpenAI is well aware of it. In a sweeping warning, OpenAI said that its next generation of artificial intelligence models will likely reach a “High” level of capability in biology.

The company is basically acknowledging what some researchers have been warning about for years: that AI can help amateurs with no formal training create potentially dangerous bioweapons.

How screwed are we?

AI companies tout their agents as research assistants. In fact, they’ve greatly promoted the systems’ ability to accelerate drug discovery, optimize enzymes for climate solutions, and aid in vaccine design. But in the wrong hands, these same systems could enable something darker.

Historically, one key barrier to bioweapons has been expertise. Pathogen engineering isn’t plug-and-play — it requires specialized knowledge and laboratory skills. But AI models trained on the sum of biological literature, methods, and heuristics can potentially act as an ever-available assistant, guiding a determined user step-by-step.

For now, the greatest biological threats still come from well-equipped labs, not laptops. Creating a bioweapon requires access to controlled substances, laboratory infrastructure, and the kind of know-how that’s hard to fake. However, that buffer — the distance between interest and ability — is shrinking.

AI isn’t inventing new pathogens. But it might help people replicate known threats faster and more easily than ever before.

“We’re not yet in the world where there’s like novel, completely unknown creation of biothreats that have not existed before,” OpenAI’s head of safety systems, Johannes Heidecke, told Axios. “We are more worried about replicating things that experts are already very familiar with.”

Overall, artificial intelligence is already accelerating fields like biology and chemistry. The net contribution is positive, but we’re entering a stage where nefarious uses with severe consequences are on the table.

How companies are trying to stop this

OpenAI says it’s taking a “multi-pronged” approach to mitigate these risks.

“We need to act responsibly amid this uncertainty. That’s why we’re leaning in on advancing AI integration for positive use cases like biomedical research and biodefense, while at the same time focusing on limiting access to harmful capabilities. Our approach is focused on prevention — we don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards.”

But what does that mean in practice?

For starters, OpenAI is teaching its models to be stricter about answering prompts that could lead to bioweaponization. In dual-use areas like virology or genetic engineering, the models are meant to provide general insights, not lab-ready instructions. In practice, that has proven to be a fragile defense.

Numerous examples from independent testers and journalists have shown that AI systems — including OpenAI’s — can be tricked into providing sensitive biological information, even with relatively simple prompt engineering. Sometimes, all it takes is phrasing a request as a fictional story, or asking for the information in stages.

OpenAI also wants more human oversight and enforcement, suspending accounts that attempt to misuse its AI and, in serious cases, reporting them to authorities. Lastly, the company will use expert “red teamers” — some trained in AI, others in biology — to try to break the safeguards under realistic conditions and show how such attempts can be stopped.

This combination of AI filters, human monitoring, and adversarial testing sounds robust. But there’s an uncomfortable truth beneath it: these systems have never been tested in the real world at the scale and stakes we’re now approaching.

Even OpenAI acknowledges that 99% effectiveness isn’t good enough. At the scale these models operate, even a tiny fraction of slip-ups could translate into thousands of dangerous answers. “We basically need, like, near perfection,” said Heidecke. But perfection is elusive — especially when novel misuse techniques can emerge faster than defenses. Prompt injection attacks, jailbreak tricks, or coordinated abuse could still overwhelm even the most thoughtfully designed systems.

We’ve already opened the floodgates

Even if OpenAI has the right approach, and even if it somehow gets it to work (both of which are big ifs), it’s not the only company in the business. Anthropic, the AI company behind Claude, has also implemented new safeguards after concluding that its latest model could contribute to biological and nuclear threats.

The U.S. government, too, is beginning to grasp the potential dual-use risks of AI. OpenAI is expanding its work with U.S. national labs and is convening a biodefense summit this July. Together, government researchers, NGOs, and policy leaders will explore how advanced AI can support both biological innovation and security.

But even with these efforts, it is hard to see a future where nefarious AI outputs are truly controlled.

AI is moving fast. And biology is uniquely sensitive. While most powerful AI tools today exist behind company firewalls, open-source models are proliferating, and hardware to run them is becoming more accessible.

The cost of synthesizing DNA has dropped dramatically. Tools that once lived in elite government labs are now available to small startups or academic labs. If the knowledge bottleneck collapses as well, bad actors may no longer need PhDs or state sponsorship to do real harm.

There’s no doubt that AI is revolutionizing biology. It’s helping us understand disease, design treatments, and respond to global health challenges faster than ever before. But as these tools grow more powerful, the line between scientific progress and misuse grows thinner. And it’s not hard to see how these models could be used to do some real harm.

