Is AI Moderation a Useful Tool or Another Failed Social Media Fix?

A new study suggests that an optimized AI model could detect harmful social media comments with 87% accuracy.

by Mihai Andrei
March 19, 2025
in Mind & Brain, News
Edited and reviewed by Zoe Gordon

Social media promised to connect the world more than ever before. But along the way, it has turned into a gargantuan double-edged sword. Sure, on one hand, it lets us keep up with friends and interests more easily than ever. But it's also quickly becoming a massive source of disinformation and mental health problems.

From hate speech to cyberbullying, harmful online discourse is on the rise, and it's extremely difficult to stop. Due to the sheer volume of content, any effort to curb the problem turns into a game of whack-a-mole: by the time you shut down one toxic profile, three more pop up. And on top of all this, there's AI.

AI enters social media

AI makes it easier than ever to create content, whether helpful or toxic. In a new study, however, researchers showed that AI can also help address the problem. Their new algorithm is 87% accurate at classifying toxic and non-toxic text without relying on manual identification.

Researchers from East West University, Bangladesh, and the University of South Australia developed an optimized Support Vector Machine (SVM) model to detect toxic comments in both Bangla (a language spoken by over 250 million people) and English. Each model was trained using 9,061 comments collected from Facebook, YouTube, WhatsApp, and Instagram. The dataset included 4,538 comments in Bangla and the rest in English.
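To make the setup more concrete, here is a minimal sketch of what such a text-classification pipeline could look like in scikit-learn. The file name, column names, and feature settings are illustrative assumptions, not the authors' exact preprocessing, and the tuning that produces the "optimized" model is discussed further below.

# A minimal sketch of an SVM toxicity classifier in the spirit of the study.
# The dataset file, column names, and TF-IDF settings are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Hypothetical dataset: one comment per row, labelled 0 (non-toxic) or 1 (toxic),
# mixing Bangla and English text as in the study.
df = pd.read_csv("comments.csv")  # assumed columns: "text", "label"

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42
)

# Character n-grams handle mixed scripts (Bangla + English) without
# language-specific tokenization.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LinearSVC()  # plain linear-kernel SVM as a baseline
clf.fit(X_train_vec, y_train)
print("SVM accuracy:", accuracy_score(y_test, clf.predict(X_test_vec)))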

SVMs have been used before to categorize social media content. The approach is typically fast and relatively straightforward, but it isn't accurate enough on its own: in this case, the baseline SVM was almost 70% accurate at categorizing comments. The researchers then tried another classifier, trained with Stochastic Gradient Descent (SGD). It was more accurate, reaching around 80%, but it also flagged harmless comments as toxic, and it was much slower than the SVM.
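Continuing the sketch above, the SGD-based alternative corresponds roughly to scikit-learn's SGDClassifier, a linear model trained with stochastic gradient descent. The snippet below reuses the TF-IDF features from the earlier code and default hyperparameters, so the exact numbers will differ from the paper's; the per-class report is what exposes the "harmless comments flagged as toxic" problem.

# Illustrative comparison of the two linear baselines, reusing X_train_vec,
# X_test_vec, y_train, y_test, and clf from the previous snippet.
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score, classification_report

sgd = SGDClassifier(loss="hinge", max_iter=1000, random_state=42)
sgd.fit(X_train_vec, y_train)

for name, model in [("SVM", clf), ("SGD", sgd)]:
    preds = model.predict(X_test_vec)
    print(name, "accuracy:", round(accuracy_score(y_test, preds), 3))
    # Precision on the toxic class shows how often harmless comments
    # get flagged as toxic (false positives).
    print(classification_report(y_test, preds, digits=3))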

Then, the researchers fine-tuned and combined these approaches into a single model, which they call an optimized SVM. This model was fast and reached an accuracy of 87%.
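The article doesn't spell out the tuning recipe, but a common way to "optimize" an SVM is to search over its regularization strength, kernel, and class weighting with cross-validation. The grid below is a generic sketch under that assumption, again reusing the features from the earlier snippets.

# Generic hyperparameter search for an SVM; the grid and scoring choice are
# assumptions, not the study's exact optimization procedure.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [0.1, 1, 10],
    "kernel": ["linear", "rbf"],
    "class_weight": [None, "balanced"],  # helps when toxic comments are the minority class
}

search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train_vec, y_train)

print("Best parameters:", search.best_params_)
print("Test accuracy:", search.score(X_test_vec, y_test))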

“Our optimized SVM model was the most reliable and effective among all three, making it the preferred choice for deployment in real-world scenarios where accurate classification of toxic comments is critical,” says Abdullahi Chowdhury, study author and AI researcher at the University of South Australia.


It’s useful, but not perfect


Toxicity in social media is a growing issue. We’re drowning in a sea of negativity and mistrust, and AI can be both a solution and a problem. It is, much like social media itself, a double-edged sword.

The model seems to work just as well across different languages, so it could be used to tackle online toxicity globally. Social media companies have repeatedly shown that they are unwilling or unable to truly tackle this issue.

“Despite efforts by social media platforms to limit toxic content, manually identifying harmful comments is impractical due to the sheer volume of online interactions, with 5.56 billion internet users in the world today,” she says. “Removing toxic comments from online network platforms is vital to curbing the escalating abuse and ensuring respectful interactions in the social media space.”

More advanced AI techniques, such as deep learning, could improve accuracy even further. While more research is needed, this could eventually enable real-time deployment on social media platforms, flagging harmful comments as they are posted.
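For a sense of what that deep-learning route might look like, here is a hedged sketch of fine-tuning a multilingual transformer on the same two-class task using the Hugging Face libraries. The checkpoint and training settings are generic choices for illustration; this is not the study's method, which evaluated classical machine-learning models.

# Fine-tuning a multilingual transformer for toxicity classification.
# Assumes the same DataFrame ("text", "label") as the earlier sketches.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-multilingual-cased"  # covers both Bangla and English
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = Dataset.from_pandas(df[["text", "label"]])
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)
dataset = dataset.train_test_split(test_size=0.2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toxicity-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()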

Could AI moderation truly be the solution to social media toxicity, or is it just another pseudo-techno-fix destined to backfire? The answer isn’t straightforward. AI has already shown immense potential in automating tasks, detecting patterns, and filtering harmful content faster than any human moderation team could. However, past attempts at AI-driven moderation have been far from perfect.

Moreover, AI lacks the nuance of human judgment. A sarcastic joke, a political discussion, or a cultural reference can easily be misclassified as toxic. At the same time, genuinely harmful comments can sometimes slip through the cracks, either because the AI was not trained on a diverse enough dataset or because bad actors find ways to game the system.

The real challenge isn’t just building better AI — it’s ensuring that these systems serve the public good rather than becoming another layer of digital dysfunction.

Journal Reference: Afia Ahsan et al, Unmasking Harmful Comments: An Approach to Text Toxicity Classification Using Machine Learning in Native Language, 2024 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT) (2025). DOI: 10.1109/3ict64318.2024.10824367

Tags: AI moderation, artificial intelligence, cyberbullying, deep learning, Facebook, hate speech, machine learning, misinformation, online discourse, social media toxicity, toxic comments


