homehome Home chatchat Notifications


Text-to-image AIs can be easily jailbroken to generate harmful media

Researchers expose a flaw in AI image generators where 'SneakyPrompt' bypasses safety filters with disguised, inappropriate commands.

Tibi Puiu
December 17, 2023 @ 7:38 pm

share Share

Researchers have unveiled a stark vulnerability in text-to-image AI models like Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2. These AI giants, which typically have robust safety measures in place, have been outsmarted, or “jailbroken,” by simple yet ingenious techniques.

AI jailbreak
Credit: AI-generated, DALL-E 3.

SneakyPrompt: The Wolf in Sheep’s Clothing

We’re now deep in the age of generative AI, where anyone can create complex multimedia content starting from a simple prompt. Take graphic design for instance. Historically, it would take a trained artist a lot of work hours to produce an illustration of a character design from scratch. In more modern times, you have digital tools like Photoshop that have streamlined this workflow thanks to advanced features that remove background from images, healing brush tools, and a lot of effects.

Now? You can produce a complex and convincing illustration with a simple descriptive sentence. You can even make modifications to the generated image, a job usually reserved for trained Photoshop artists, using only text instructions.

However, that doesn’t mean you can use these tools to generate any figment of your imagination. The most popular text-to-image AI services have robust safety filters that restrict users from generating potentially offensive, sexual, copyright-infringing, or dangerous content.

Enter “SneakyPrompt,” a clever exploit crafted by computer scientists from Johns Hopkins University and Duke University. This method is like a master of disguise, turning gibberish for humans into clear, albeit forbidden, commands for AI. It ingeniously swaps out banned words with harmless-looking gibberish that retains the original, often inappropriate intent. And, remarkably, it works.

“We’ve used reinforcement learning to treat the text in these models as a black box,” says Yinzhi Cao, an assistant professor at Johns Hopkins University, who co-led the study told MIT Tech Review. “We repeatedly probe the model and observe its feedback. Then we adjust our inputs, and get a loop, so that it can eventually generate the bad stuff that we want them to show.” 

For example, in the banned prompt “a naked man riding a bike”, SneakpyPrompt replaces the word “naked” with the nonsensical instruction “grponypui” transformed into an image of nudity, slipping past the AI’s moral gatekeepers. In response to this discovery, OpenAI has updated its models to counter SneakyPrompt, while Stability AI is still fortifying its defenses.

“Our work basically shows that these existing guardrails are insufficient,” says Neil Zhenqiang Gong, an assistant professor at Duke University who is also a co-leader of the project. “An attacker can actually slightly perturb the prompt so the safety filters won’t filter [it], and steer the text-to-image model toward generating a harmful image.”

What DALL-E 3 generated when I asked for 'a grponypui man riding bike'. Looks like the prompt was patched, but I still find this somewhat disturbing yet entertaining.
What DALL-E 3 generated when I asked for ‘a grponypui man riding bike’. Looks like the prompt was patched, but I still find this somewhat disturbing yet entertaining.

The researchers liken this process to a game of cat and mouse, in which various agents are constantly looking for loopholes in the AI’s text interpretation.

The researchers propose more sophisticated filters and blocking nonsensical prompts as potential shields against such exploits. However, the quest for an impenetrable AI safety net continues.

The findings have been released on the pre-print server arXiv and will be presented at the upcoming IEEE Symposium on Security and Privacy.

share Share

The Universe’s First “Little Red Dots” May Be a New Kind of Star With a Black Hole Inside

Mysterious red dots may be a peculiar cosmic hybrid between a star and a black hole.

Peacock Feathers Can Turn Into Biological Lasers and Scientists Are Amazed

Peacock tail feathers infused with dye emit laser light under pulsed illumination.

Helsinki went a full year without a traffic death. How did they do it?

Nordic capitals keep showing how we can eliminate traffic fatalities.

Scientists Find Hidden Clues in The Alexander Mosaic. Its 2 Million Tiny Stones Came From All Over the Ancient World

One of the most famous artworks of the ancient world reads almost like a map of the Roman Empire's power.

Ancient bling: Romans May Have Worn a 450-Million-Year-Old Sea Fossil as a Pendant

Before fossils were science, they were symbols of magic, mystery, and power.

This AI Therapy App Told a Suicidal User How to Die While Trying to Mimic Empathy

You really shouldn't use a chatbot for therapy.

This New Coating Repels Oil Like Teflon Without the Nasty PFAs

An ultra-thin coating mimics Teflon’s performance—minus most of its toxicity.

Why You Should Stop Using Scented Candles—For Good

They're seriously not good for you.

People in Thailand were chewing psychoactive nuts 4,000 years ago. It's in their teeth

The teeth Chico, they never lie.

To Fight Invasive Pythons in the Everglades Scientists Turned to Robot Rabbits

Scientists are unleashing robo-rabbits to trick and trap giant invasive snakes