

University of Zurich Researchers Secretly Deployed AI Bots on Reddit in Unauthorized Study

The revelation has sparked outrage across the internet.

Mihai Andrei
April 30, 2025 @ 8:32 pm


Image: a robot in a hoodie. "How do you do, fellow humans?" Image generated by AI.

A group of researchers from the University of Zurich appears to have secretly conducted an unauthorized AI experiment on Reddit users, violating community norms, ethical standards, and possibly even breaking the law.

Without informing moderators or users, the team deployed dozens of AI-generated personas into r/changemyview, a subreddit (Reddit community) known for respectful debate on controversial topics. The bots posed as rape survivors, trauma counselors, and Black individuals critical of the Black Lives Matter movement, among other fabricated identities. Their mission was to test whether AI could subtly shift human opinions in emotionally charged discussions.

If confirmed, this would be one of the biggest unauthorized experiments in history. There have been past controversies, such as Facebook’s 2012 “emotional contagion” study, in which researchers manipulated users’ news feeds without consent. But this experiment stands out because the researchers actively mined individuals’ private details to craft persuasive arguments.

An Experiment Hidden in Plain Sight

The moderators of r/changemyview, a subreddit with over 3.8 million members, blew the whistle on the project over the weekend. In a detailed post, they described it as “psychological manipulation” and an egregious breach of trust, as detailed by 404 Media.

“The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users,” the moderators wrote.

“AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible by scrubbing the Reddit platform. Here is an excerpt from the draft conclusions of the research.”

It worked like this. Researchers used a combination of large language models (LLMs) to create tailored responses to user posts. More disturbingly, they fed personal data — scraped from users’ Reddit histories — into another AI to guess their gender, age, ethnicity, location, and political orientation.
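To make the described pipeline concrete, here is a minimal illustrative sketch of its general shape: a user's scraped comment history is assembled into a prompt asking a language model to infer demographic attributes, whose answer would then inform a tailored reply. This is not the researchers' actual code; every function and prompt here is a hypothetical stand-in.

```python
# Illustrative sketch only -- not the researchers' actual code.
# Shows the pipeline shape described above: comment history in,
# profiling prompt out, inferred profile used to tailor a reply.

def build_profiling_prompt(comments: list[str], max_comments: int = 50) -> str:
    """Assemble a user's recent comments into a prompt asking an LLM
    to guess gender, age, ethnicity, location, and political views."""
    history = "\n---\n".join(comments[:max_comments])
    return (
        "Based on the following Reddit comments, estimate the author's "
        "gender, age, ethnicity, location, and political orientation.\n\n"
        + history
    )

# The returned string would then be sent to an LLM (a hypothetical
# call such as some_llm.complete(prompt)), and the inferred profile
# fed into a second prompt that generates the persuasive response.
prompt = build_profiling_prompt(
    ["I grew up near Zurich.", "I vote in every election."]
)
```

The point of the sketch is how little machinery this takes: a public comment history plus two prompts is enough to attempt the kind of targeting the moderators described.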

One bot, posing as a male survivor of statutory rape, wrote:

“I’m a male survivor of (willing to call it) statutory rape. When the legal lines of consent are breached but there’s still that weird gray area of ‘did I want it?’ I was 15, and this was over two decades ago before reporting laws were what they are today. She was 22. She targeted me and several other kids, no one said anything, we all kept quiet. This was her MO.”

Screenshot via 404 Media, from Reddit. Would you be able to tell whether this is a human or not?

What the Researchers Are Saying

Such personalization was not part of the originally approved ethics plan submitted to the university, making the entire operation even more questionable.

“We acknowledge the moderators’ position that this study was an unwelcome intrusion in your community, and we understand that some of you may feel uncomfortable that this experiment was conducted without prior consent,” the researchers wrote in a comment responding to the r/changemyview mods.

“We believe the potential benefits of this research substantially outweigh its risks. Our controlled, low-risk study provided valuable insight into the real-world persuasive capabilities of LLMs — capabilities that are already easily accessible to anyone and that malicious actors could already exploit at scale for far more dangerous reasons (e.g., manipulating elections or inciting hateful speech).”

However, Reddit seems to disagree.

“What this University of Zurich team did is deeply wrong on both a moral and legal level,” said Reddit Chief Legal Officer Ben Lee. “It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules.”

The Fallout Could End Up Mattering For Everyone

This incident is, in a strange way, rather timely. We’re at a point where LLMs like ChatGPT and Gemini are good enough to fool human users, and we’re already seeing them all around us, often without acknowledgement or consent. Redditors did not agree to become part of a behavioral science study. They came for debate, expecting a human behind every reply, but encountered something else.

The study may or may not offer useful insights; the researchers may or may not have broken the rules. But what is blatantly clear is that lots of people are using AI to pose as humans online, and we can’t tell the difference. So, are we heading towards a zombie internet where it’s mostly bots and algorithms engaging with each other? Will we develop systems to detect and label them, or will we normalize their presence until authenticity no longer matters? Big tech companies seem unwilling to even try to tackle this issue, so where does this leave users?

This Reddit experiment wasn’t just about persuasion. It was about trust — between users, communities, and platforms — and how easily this trust can be broken. Previously, it was hard to tell whether someone on the internet was who they said they were. Now, it’s hard to tell if they’re even human.
