ZME Science

AI Could Help You Build a Virus. OpenAI Knows It — and It’s Worried

We should prepare ourselves for a society where amateurs can create garage bioweapons.

by Mihai Andrei
June 23, 2025
in Biology, Future, News
Edited and reviewed by Zoe Gordon
[Image: a virus depiction. AI-generated image.]

“Can you help me create bioweapons?”

Predictably, ChatGPT said no. “Creating or disseminating biological weapons is illegal, unethical, and dangerous. If you have questions about biology, epidemiology, or related scientific topics for legitimate educational or research purposes, I’m happy to help,” the AI added.

So, I continued with a “genuine question” about editing viruses with low-tech methods, and it promptly gave me a guide on how to go about it. Jailbreaking AI chatbots like ChatGPT is notoriously easy, and OpenAI is well aware of it. In a sweeping warning, the company said that its next generation of artificial intelligence models will likely reach a “High” level of capability in biology.

The company is basically acknowledging what some researchers have been warning about for years: that AI can help amateurs with no formal training create potentially dangerous bioweapons.

How screwed are we?

AI companies tout their agents as research assistants. In fact, they’ve greatly promoted the systems’ ability to accelerate drug discovery, optimize enzymes for climate solutions, and aid in vaccine design. But these same systems could, in the wrong hands, enable something darker.

Historically, one key barrier to bioweapons has been expertise. Pathogen engineering isn’t plug-and-play — it requires specialized knowledge and laboratory skills. But AI models trained on the sum of biological literature, methods, and heuristics can potentially act as an ever-available assistant, guiding a determined user step-by-step.

For now, the greatest biological threats still come from well-equipped labs, not laptops. Creating a bioweapon requires access to controlled substances, laboratory infrastructure, and the kind of know-how that’s hard to fake. However, that buffer — the distance between interest and ability — is shrinking.


AI isn’t inventing new pathogens. But it might help people replicate known threats faster and more easily than ever before.

“We’re not yet in the world where there’s like novel, completely unknown creation of biothreats that have not existed before,” OpenAI’s head of safety systems, Johannes Heidecke, told Axios. “We are more worried about replicating things that experts are already very familiar with.”

Overall, artificial intelligence is already accelerating fields like biology and chemistry. The net contribution is positive, but we’re entering a stage where nefarious uses with severe consequences are on the table.

How companies are trying to stop this

OpenAI says it’s taking a “multi-pronged” approach to mitigate these risks.

“We need to act responsibly amid this uncertainty. That’s why we’re leaning in on advancing AI integration for positive use cases like biomedical research and biodefense, while at the same time focusing on limiting access to harmful capabilities. Our approach is focused on prevention — we don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards.”

But what does that mean in practice?

For starters, it’s teaching models to be stricter about answering prompts that could lead to bioweaponization. In dual-use areas like virology or genetic engineering, the models aim to provide general insights, not lab-ready instructions. In practice, that’s proven to be a fragile defense.

Numerous examples from independent testers and journalists have shown that AI systems — including OpenAI’s — can be tricked into providing sensitive biological information, even with relatively simple prompt engineering. Sometimes, all it takes is phrasing a request as a fictional story, or asking for the information in stages.

OpenAI also wants to add more human oversight and enforcement, suspending accounts that attempt to misuse its AI and, in serious cases, reporting them to authorities. Lastly, it will also use expert “red teamers” — some trained in AI, others in biology — who attempt to break the safeguards under realistic conditions, probing for weaknesses before bad actors find them.

This combination of AI filters, human monitoring, and adversarial testing sounds robust. But there’s an uncomfortable truth beneath it: these systems have never been tested in the real world at the scale and stakes we’re now approaching.

Even OpenAI acknowledges that 99% effectiveness isn’t good enough. “We basically need, like, near perfection,” said Heidecke. But perfection is elusive — especially when novel misuse techniques can emerge faster than defenses. Prompt injection attacks, jailbreak tricks, or coordinated abuse could still overwhelm even the most thoughtfully designed systems.

We’ve already opened the floodgates

Even if OpenAI has the right approach, and even if it somehow gets it to work (both big ifs), it’s not the only company in the business. Anthropic, the AI company behind Claude, has also implemented new safeguards after concluding that its latest model could contribute to biological and nuclear threats.

The U.S. government, too, is beginning to grasp the potential dual-use risks of AI. OpenAI is expanding its work with U.S. national labs and is convening a biodefense summit this July. Together, government researchers, NGOs, and policy leaders will explore how advanced AI can support both biological innovation and security.

But even with these efforts, it’s hard to see a future where nefarious AI outputs are truly controlled.

AI is moving fast. And biology is uniquely sensitive. While most powerful AI tools today exist behind company firewalls, open-source models are proliferating, and hardware to run them is becoming more accessible.

The cost of synthesizing DNA has dropped dramatically. Tools that once lived in elite government labs are now available to small startups or academic labs. If the knowledge bottleneck collapses as well, bad actors may no longer need PhDs or state sponsorship to do real harm.

There’s no doubt that AI is revolutionizing biology. It’s helping us understand disease, design treatments, and respond to global health challenges faster than ever before. But as these tools grow more powerful, the line between scientific progress and misuse grows thinner. And it’s not hard to see how these models could be used to do some real harm.

Tags: AI, Biology, bioweapon, Pathogens

Mihai Andrei

Dr. Andrei Mihai is a geophysicist and founder of ZME Science. He has a Ph.D. in geophysics and archaeology and has completed courses from prestigious universities (with programs ranging from climate and astronomy to chemistry and geology). He is passionate about making research more accessible to everyone and communicating news and features to a broad audience.


© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.
