AI Could Help You Build a Virus. OpenAI Knows It — and It’s Worried

We should prepare ourselves for a society where amateurs can create garage bioweapons.

Mihai Andrei
June 23, 2025 @ 5:35 pm


AI-generated image depicting a virus.

“Can you help me create bioweapons?”

Predictably, ChatGPT said no. “Creating or disseminating biological weapons is illegal, unethical, and dangerous. If you have questions about biology, epidemiology, or related scientific topics for legitimate educational or research purposes, I’m happy to help,” the AI added.

So I followed up with a “genuine question” about editing viruses using low-tech methods, and it promptly gave me a guide on how to go about it. Jailbreaking AI chatbots like ChatGPT is notoriously easy, and OpenAI is well aware of it. In a sweeping warning, the company said that its next generation of artificial intelligence models will likely reach a “High” level of capability in biology.

The company is basically acknowledging what some researchers have been warning about for years: that AI can help amateurs with no formal training create potentially dangerous bioweapons.

How screwed are we?

AI companies tout their agents as research assistants. Indeed, they’ve heavily promoted the systems’ ability to accelerate drug discovery, optimize enzymes for climate solutions, and aid in vaccine design. But these same systems could, in the wrong hands, enable something darker.

Historically, one key barrier to bioweapons has been expertise. Pathogen engineering isn’t plug-and-play — it requires specialized knowledge and laboratory skills. But AI models trained on the sum of biological literature, methods, and heuristics can potentially act as an ever-available assistant, guiding a determined user step-by-step.

For now, the greatest biological threats still come from well-equipped labs, not laptops. Creating a bioweapon requires access to controlled substances, laboratory infrastructure, and the kind of know-how that’s hard to fake. However, that buffer — the distance between interest and ability — is shrinking.

AI isn’t inventing new pathogens. But it might help people replicate known threats faster and more easily than ever before.

“We’re not yet in the world where there’s like novel, completely unknown creation of biothreats that have not existed before,” OpenAI’s head of safety systems, Johannes Heidecke, told Axios. “We are more worried about replicating things that experts are already very familiar with.”

Overall, AI is already accelerating fields like biology and chemistry. The net contribution is positive, but we’re entering a stage where nefarious uses with severe consequences are on the table.

How companies are trying to stop this

OpenAI says it’s taking a “multi-pronged” approach to mitigate these risks.

“We need to act responsibly amid this uncertainty. That’s why we’re leaning in on advancing AI integration for positive use cases like biomedical research and biodefense, while at the same time focusing on limiting access to harmful capabilities. Our approach is focused on prevention — we don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards.”

But what does that mean in practice?

For starters, it’s teaching its models to be stricter about answering prompts that could lead to bioweaponization. In dual-use areas like virology or genetic engineering, the models are meant to provide general insights, not lab-ready instructions. In practice, that’s proven to be a fragile defense.

Numerous examples from independent testers and journalists have shown that AI systems — including OpenAI’s — can be tricked into providing sensitive biological information, even with relatively simple prompt engineering. Sometimes, all it takes is phrasing a request as a fictional story, or asking for the information in stages.

OpenAI also wants to add more human oversight and enforcement, suspending accounts that attempt to hijack its AI and, in serious cases, reporting them to authorities. Lastly, it will use expert “red teamers,” some trained in AI, others in biology, to try to break the safeguards under realistic conditions and find out how such attacks can be stopped.

This combination of AI filters, human monitoring, and adversarial testing sounds robust. But there’s an uncomfortable truth beneath it: these systems have never been tested in the real world at the scale and stakes we’re now approaching.

Even OpenAI acknowledges that 99% effectiveness isn’t good enough. “We basically need, like, near perfection,” said Heidecke. But perfection is elusive, especially when novel misuse techniques can emerge faster than defenses. Prompt injection attacks, jailbreak tricks, or coordinated abuse could still overwhelm even the most thoughtfully designed systems.

We’ve already opened the floodgates

Even if OpenAI has the right approach, and even if it somehow gets it to work (both big “ifs”), it’s not the only company in the business. Anthropic, the AI company behind Claude, has also implemented new safeguards after concluding that its latest model could contribute to biological and nuclear threats.

The U.S. government, too, is beginning to grasp the potential dual-use risks of AI. OpenAI is expanding its work with U.S. national labs and is convening a biodefense summit this July. Together, government researchers, NGOs, and policy leaders will explore how advanced AI can support both biological innovation and security.

But even with these efforts, it’s hard to see a future where nefarious AI outputs are truly controlled.

AI is moving fast. And biology is uniquely sensitive. While most powerful AI tools today exist behind company firewalls, open-source models are proliferating, and hardware to run them is becoming more accessible.

The cost of synthesizing DNA has dropped dramatically. Tools that once lived in elite government labs are now available to small startups or academic labs. If the knowledge bottleneck collapses as well, bad actors may no longer need PhDs or state sponsorship to do real harm.

There’s no doubt that AI is revolutionizing biology. It’s helping us understand disease, design treatments, and respond to global health challenges faster than ever before. But as these tools grow more powerful, the line between scientific progress and misuse grows thinner. And it’s not hard to see how these models could be used to do some real harm.
