
AI Bots Were Made to Use a Stripped-Down Social Network With No Curation Algorithms and They Still Formed Toxic Echo Chambers

Can social media make emotionless AI bots pursue a political ideology? The answer will shock you.

by Rupendra Brahambhatt
August 28, 2025
in Future, News
Edited and reviewed by Tibi Puiu
Credit: ZME Science.

Social media platforms have long been blamed for fueling polarization, disinformation, and toxic debates. The usual suspects are their algorithms, which are designed to keep people hooked by pushing outrage and sensationalism. In the process, they let loose our basest instincts. However, what if the problem runs deeper, not just in the algorithms, but in the very structure of social media itself? 

A new study from researchers at the University of Amsterdam suggests exactly that. In a surprising experiment, the study authors built a stripped-down social media platform populated entirely by AI chatbots. There were no ads, no recommendation algorithms, no trending tabs, and no other hidden tricks to keep users scrolling.

Yet, even in this bare-bones environment, the bots quickly split into echo chambers, amplified extreme voices, and rewarded the most partisan content. These findings suggest that social media, in its current form, may be inherently flawed.

Our “study has demonstrated that key dysfunctions of social media – ideological homophily, attention inequality, and the amplification of extreme voices – can arise even in a minimal simulated environment that includes only posting, reposting, and following, in the absence of recommendation algorithms or engagement optimization,” the researchers said.

How did social media make bots fall for political ideologies?

The researchers first created a minimalist platform that included only three basic functions: posting, reposting, and following. They then populated this platform with 500 AI chatbots, each powered by OpenAI’s GPT-4o mini. To simulate a diverse user base, each chatbot was given a persona with a fixed political leaning: some leaned left, some right, and some were moderate.

These personas shaped how the bots interacted: who they chose to follow, what kind of posts they created, and how they responded to other bots. Next came the simulations: five large-scale runs, in each of which the bots performed a total of 10,000 actions.
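To make the setup concrete, here is a minimal sketch in Python of the kind of agent loop the study describes. This is not the authors' code: the Post, Agent, and choose_action names are hypothetical, the decision step is a random stand-in for the GPT-4o mini call each bot actually makes, and the feed is simply the most recent posts, since the platform has no ranking.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Post:
    author: int
    text: str
    reposts: int = 0

@dataclass
class Agent:
    uid: int
    leaning: str                          # "left", "right", or "moderate"
    following: set = field(default_factory=set)

def choose_action(agent, feed):
    """Stand-in for the LLM call: pick one of the three allowed actions."""
    return random.choice(["post", "repost", "follow"])

agents = [Agent(i, random.choice(["left", "right", "moderate"]))
          for i in range(500)]
posts = []

for step in range(10_000):                # one run; the study performed five
    agent = random.choice(agents)
    feed = posts[-20:]                    # no ranking: just the latest posts
    action = choose_action(agent, feed)
    if action == "post":
        posts.append(Post(agent.uid, f"a {agent.leaning} take"))
    elif action == "repost" and feed:
        random.choice(feed).reposts += 1
    elif action == "follow":
        target = random.choice(agents)
        if target.uid != agent.uid:
            agent.following.add(target.uid)
```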

Every action was logged so the researchers could track patterns, including which posts got the most engagement, how followers clustered, and whether communities split along ideological lines. Soon, the bots began to form polarized clusters, following those who thought like them while ignoring opposing views.
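With every action logged, the drift into echo chambers can be quantified straight from the follow graph. One simple metric, offered here as an illustration rather than the paper's exact measure, is the share of follow edges connecting agents with the same leaning; random following across three leanings would keep it near chance, so values well above that indicate ideological clustering. Continuing the sketch above:

```python
def follow_homophily(agents):
    """Fraction of follow edges linking same-leaning agents (illustrative)."""
    by_id = {a.uid: a for a in agents}
    same = total = 0
    for a in agents:
        for target_id in a.following:
            total += 1
            same += (a.leaning == by_id[target_id].leaning)
    return same / total if total else 0.0

print(f"homophily: {follow_homophily(agents):.2f}")
```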

Interestingly, the most partisan accounts became the most influential. Bots that posted strong political opinions gained the most followers and reposts, while moderate voices received little attention. This created a sharp inequality where a small group of extreme accounts dominated the conversation, mirroring what happens in real-world platforms like Facebook and X.

“We observe correlations between political extremity and engagement. Users with more partisan profiles tend to receive slightly more followers (r = 0.11) and reposts (r = 0.09). While relatively weak, this correlation suggests the presence of a ‘social media prism,’ where more polarized users and content attract disproportionate attention,” the researchers said.
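The quoted r values are ordinary Pearson correlations between a user's partisanship and their engagement counts. On Python 3.10+, the same statistic can be recomputed on toy numbers (invented here purely to show the mechanics; the study's real correlations were weak, around 0.1):

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Toy per-user logs: a partisanship score in [0, 1] and follower counts.
# These numbers are invented for illustration, not taken from the study.
partisanship = [0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.5, 0.6]
followers    = [34, 12, 20, 18, 29, 11, 16, 22]

print(f"r = {correlation(partisanship, followers):.2f}")
```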

To see if the outcome could be changed, the team tested six common proposals for fixing social media. They tried chronological feeds, reducing the weight of viral content, hiding follower and repost numbers, hiding user bios, amplifying opposing views, and diversifying feeds. 
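Two of these interventions are easy to picture in code. Reusing the Post objects from the sketch above: a chronological feed simply ignores engagement, while down-weighting viral content dampens repost counts before ranking. The exponent is an arbitrary illustration, not a parameter from the paper.

```python
def chronological_feed(posts, k=20):
    """Chronological intervention: newest first, engagement ignored."""
    return list(reversed(posts[-k:]))

def damped_feed(posts, k=20, alpha=0.5):
    """Down-weight viral content: rank by a dampened repost count.

    alpha < 1 shrinks the gap between viral and ordinary posts; the
    value 0.5 is illustrative, not taken from the study.
    """
    return sorted(posts, key=lambda p: p.reposts ** alpha, reverse=True)[:k]
```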

Each intervention was tested under the same conditions to see if it could disrupt the drift toward echo chambers. The results were shocking. None of the fixes worked well, and most made only small improvements — at best, no more than a six percent reduction in engagement with partisan accounts. 

In fact, in some cases, the changes backfired. Chronological feeds ended up pushing extreme content to the top, while hiding user bios gave even more attention to polarized voices. More importantly, even when an intervention improved one dysfunction, such as reducing attention inequality, it often worsened another, such as amplifying toxic content. 

We must fix the problems with social media

The study’s findings paint a troubling picture. They suggest that polarization, echo chambers, and toxic amplification may be baked into the very structure of social media, not just its recommendation algorithms. 

Our “findings challenge the common view that social media’s dysfunctions are primarily the result of algorithmic curation. Instead, these problems may be rooted in the very architecture of social media platforms that grow through emotionally reactive sharing,” the researchers added.

If such dysfunction emerges in a simple environment with only bots, posting, and following, then real-world platforms, with billions of human users and profit-driven recommendation engines, may be destined to exacerbate these problems even further.

In this case, improving online discourse will require more than technical tweaks. It may demand a fundamental redesign of how social media works, from how connections are formed to how attention is distributed. Otherwise, as generative AI floods platforms with even more content, the toxic polarization on social media could accelerate.

It is also important to note that “LLM-based agents, while offering rich representations of human behavior, function as black boxes and carry risks of embedded bias. The findings of this study should hence not be taken as definitive conclusions, but as a starting point for further inquiry,” the researchers added.

The study has been published as a preprint on arXiv.

Tags: AI, chatbots, political ideologies, social media

Rupendra Brahambhatt

Rupendra Brahambhatt is an experienced journalist and filmmaker who has covered culture, science, and entertainment news for the past five years. With a background in Zoology and Communication, he has worked with some of the most innovative media agencies in different parts of the globe.

© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.
