AI Bots Were Made to Use a Stripped Down Social Network With No Curation Algorithms and They Still Formed Toxic Echo Chambers

Can social media make emotionless AI bots pursue a political ideology? The answer will shock you.

Rupendra Brahambhatt
August 28, 2025 @ 4:24 pm

Social media platforms have long been blamed for fueling polarization, disinformation, and toxic debates. The usual suspects are their algorithms, which are designed to keep people hooked by pushing outrage and sensationalism. In the process, they let loose our basest instincts. However, what if the problem runs deeper, not just in the algorithms, but in the very structure of social media itself? 

A new study from researchers at the University of Amsterdam suggests exactly that. In a surprising experiment, the study authors built a stripped-down social media platform populated entirely by AI chatbots. There were no ads, no recommendation algorithms, no trending tabs, and no other hidden tricks to keep users scrolling on this platform.

Yet, even in this bare-bones environment, the bots quickly split into echo chambers, amplified extreme voices, and rewarded the most partisan content. These findings strongly indicate that perhaps social media, in its current form, is inherently flawed.

“Our study has demonstrated that key dysfunctions of social media – ideological homophily, attention inequality, and the amplification of extreme voices – can arise even in a minimal simulated environment that includes only posting, reposting, and following, in the absence of recommendation algorithms or engagement optimization,” the researchers said.

How did social media make bots fall for political ideologies?

The researchers first created a minimalist platform that included only three basic functions: posting, reposting, and following. They then populated this platform with 500 AI chatbots, each powered by OpenAI’s GPT-4o mini. To simulate a diverse user base, each chatbot was given a persona with a fixed political leaning — some leaned left, some right, and some were moderate.

These personas shaped the way the bots interacted: who they chose to follow, what kind of posts they created, and how they responded to other bots. Next came the simulations. Across five large-scale runs, the bots performed a total of 10,000 actions per run.

Every action was logged so the researchers could track patterns, including which posts got the most engagement, how followers clustered, and whether communities split along ideological lines. Soon, the bots began to form polarized clusters, following those who thought like them while ignoring opposing views.
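The dynamic described above — agents with fixed leanings choosing whom to follow, and like-minded ties accumulating into clusters — can be illustrated with a toy agent-based model. This is a minimal sketch, not the study’s code (the study used GPT-4o-mini agents rather than a hand-written follow rule), and every name and parameter here is invented for illustration:

```python
import random

random.seed(0)

N_BOTS = 50     # toy scale; the study used 500 LLM-powered agents
N_STEPS = 5000  # the study logged 10,000 actions per run

# Each bot gets a fixed political leaning in [-1, 1] (left to right).
leaning = [random.uniform(-1, 1) for _ in range(N_BOTS)]
follows = [set() for _ in range(N_BOTS)]

def similarity(a, b):
    """Ideological closeness in [0, 1]: 1 = identical leanings."""
    return 1 - abs(leaning[a] - leaning[b]) / 2

for _ in range(N_STEPS):
    bot = random.randrange(N_BOTS)
    other = random.randrange(N_BOTS)
    if other == bot:
        continue
    # Follow decision is biased toward like-minded bots (homophily).
    if random.random() < similarity(bot, other) ** 3:
        follows[bot].add(other)

# Clustering check: the leaning gap between connected bots should be
# much smaller than the gap between random pairs (2/3 for Unif(-1, 1)).
gaps = [abs(leaning[a] - leaning[b]) for a in range(N_BOTS) for b in follows[a]]
print(f"mean ideological gap among ties: {sum(gaps) / len(gaps):.2f}")
print(f"mean gap among random pairs:     {2 / 3:.2f}")
```

Even this crude rule reproduces the headline effect: the follow graph ends up ideologically sorted without any recommendation algorithm nudging it there.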

Interestingly, the most partisan accounts became the most influential. Bots that posted strong political opinions gained the most followers and reposts, while moderate voices received little attention. This created a sharp inequality where a small group of extreme accounts dominated the conversation, mirroring what happens in real-world platforms like Facebook and X.

“We observe correlations between political extremity and engagement. Users with more partisan profiles tend to receive slightly more followers (r = 0.11) and reposts (r = 0.09). While relatively weak, this correlation suggests the presence of a ‘social media prism,’ where more polarized users and content attract disproportionate attention,” the researchers said.

To see if the outcome could be changed, the team tested six common proposals for fixing social media. They tried chronological feeds, reducing the weight of viral content, hiding follower and repost numbers, hiding user bios, amplifying opposing views, and diversifying feeds. 

Each intervention was tested under the same conditions to see if it could disrupt the drift toward echo chambers. The results were shocking. None of the fixes worked well, and most made only small improvements — at best, no more than a six percent reduction in engagement with partisan accounts. 

In fact, in some cases, the changes backfired. Chronological feeds ended up pushing extreme content to the top, while hiding user bios gave even more attention to polarized voices. More importantly, even when an intervention improved one dysfunction, such as reducing attention inequality, it often worsened another, such as amplifying toxic content. 
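Each of these interventions amounts to swapping out the function that ranks a user’s feed. A toy sketch (invented data, not the study’s implementation) of why a chronological feed can backfire: if partisan accounts also post most often, the newest posts are the most extreme ones, so sorting by recency surfaces the same content as sorting by engagement.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    extremity: float  # 0 (moderate) .. 1 (highly partisan)
    reposts: int
    timestamp: int

# Partisan accounts post more recently and get more reposts.
posts = [
    Post("moderate1", 0.1, 3, 1),
    Post("moderate2", 0.2, 5, 2),
    Post("partisan1", 0.9, 40, 3),
    Post("partisan2", 0.8, 30, 4),
]

def engagement_feed(posts):
    """Baseline ranking: most-reposted first."""
    return sorted(posts, key=lambda p: -p.reposts)

def chronological_feed(posts):
    """Intervention: newest first, ignoring engagement."""
    return sorted(posts, key=lambda p: -p.timestamp)

def mean_extremity_of_top(feed, k=2):
    return sum(p.extremity for p in feed[:k]) / k

print("engagement top-2 extremity:   ", mean_extremity_of_top(engagement_feed(posts)))
print("chronological top-2 extremity:", mean_extremity_of_top(chronological_feed(posts)))
```

In this toy case both rankings put the same partisan posts on top, echoing the study’s finding that a chronological feed alone does not dislodge extreme content.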

We must fix the problems with social media

The study’s findings paint a troubling picture. They suggest that polarization, echo chambers, and toxic amplification may be baked into the very structure of social media, not just its recommendation algorithms. 

“Our findings challenge the common view that social media’s dysfunctions are primarily the result of algorithmic curation. Instead, these problems may be rooted in the very architecture of social media platforms that grow through emotionally reactive sharing,” the researchers added.

If such dysfunction emerges in a simple environment with only bots, posting, and following, then real-world platforms, with billions of human users and profit-driven recommendation engines, may be destined to exacerbate these problems even further.

In this case, improving online discourse will require more than technical tweaks. It may demand a fundamental redesign of how social media works, from how connections are formed to how attention is distributed. Otherwise, as generative AI floods platforms with even more content, the toxic polarization on social media could accelerate.

It is also important to note that “LLM-based agents, while offering rich representations of human behavior, function as black boxes and carry risks of embedded bias. The findings of this study should hence not be taken as definitive conclusions, but as a starting point for further inquiry,” the researchers added.

The study is available as a preprint on arXiv.
