Roughly half of the 200 million tweets related to the virus published since January were sent by accounts that appear to be bots. They seem to have a particular interest in the conversation about ‘reopening America’ and are dominating the discourse on this topic.
Scrolling through your Twitter feed, you might ignore most of what’s going on and focus only on what draws your eye. But even if you paid attention to every single story, you’d likely not be able to tell which were posted by a bot and which by an actual person.
Researchers use a multitude of methods to tell whether posts come from humans or automated accounts, some of which rely on artificial intelligence. In general, however, researchers look at factors such as the number of followers, when an account was created, how often it tweets, and at what hours. Sometimes, things line up too perfectly: new accounts, with similar follower profiles, posting at similar times, about the same hashtags. Other times, the tells are even clearer.
“Tweeting more frequently than is humanly possible or appearing to be in one country and then another a few hours later is indicative of a bot,” said Kathleen Carley, a professor of computer science at Carnegie Mellon University. Carley is conducting a study into bot-generated coronavirus activity on Twitter that has yet to be published.
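The two tells Carley mentions can be expressed as simple heuristics. Below is a minimal sketch of that idea, not the study's actual methodology: the thresholds (`MAX_HUMAN_TWEETS_PER_DAY`, `MIN_TRAVEL_HOURS`) and the function name are hypothetical values chosen for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration only -- the study's real
# criteria have not been published.
MAX_HUMAN_TWEETS_PER_DAY = 1000
MIN_TRAVEL_HOURS = 8  # minimum plausible time to appear in another country

def flags_as_bot(timestamps, countries):
    """Return True if an account's activity pattern suggests automation.

    timestamps: chronologically sorted list of datetime objects, one per tweet
    countries:  list of country codes aligned with timestamps
    """
    # Tell 1: more tweets in any 24-hour window than is humanly possible
    for i, start in enumerate(timestamps):
        window_end = start + timedelta(hours=24)
        count = sum(1 for t in timestamps[i:] if t < window_end)
        if count > MAX_HUMAN_TWEETS_PER_DAY:
            return True
    # Tell 2: appearing in one country, then another, only hours later
    for i in range(1, len(timestamps)):
        hopped = countries[i] != countries[i - 1]
        gap = timestamps[i] - timestamps[i - 1]
        if hopped and gap < timedelta(hours=MIN_TRAVEL_HOURS):
            return True
    return False
```

An account geotagged in two countries two hours apart would be flagged by the second rule, while an ordinary account posting once a day from one place would not.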
Carley and colleagues collected more than 200 million tweets discussing the coronavirus or COVID-19 pandemic. They found that 82% of the top 50 influential retweeters on these topics are bots. Out of the top 1,000, 62% are bots.
These bots also do not seem to be acting randomly. Instead, the stories they propagate appear aimed at polarizing public discourse.
“We do know that it looks like it’s a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that,” she adds.
Furthermore, bot activity appears to be twice as intense as what researchers would expect based on previous natural disasters, further supporting the idea that this is a deliberate campaign.
While finding a smoking gun will be extremely difficult, researchers are fairly confident that this is an active campaign and not just random bot activity.
That conspiracy theory you read? It could be fueled by a bot
The team found 100 types of inaccurate COVID-19 stories propagated by Twitter bots, ranging from unproven cures to conspiracy theories about hospitals being filled with mannequins and 5G fearmongering.
These actions have already had tangible real-life consequences. For instance, several 5G towers in England have been destroyed by vandals as members of the public fell victim to conspiracy theories spread on social media.
But the larger stakes lie in shifting public discourse and polarizing people. A good example of this is the ‘reopening America’ discussion.
Researchers found strong indicators that this discussion is orchestrated by bot activity. Accounts that are definitely bots generate 34% of all tweets about this topic, and accounts that seem to be either bots or humans with bot assistants produce over 60% of the tweets.
“When we see a whole bunch of tweets at the same time or back to back, it’s like they’re timed,” Carley said. “We also look for use of the same exact hashtag, or messaging that appears to be copied and pasted from one bot to the next.”
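The copy-paste signal Carley describes can be sketched as a simple grouping check: flag any message that several distinct accounts post nearly verbatim within a short window. This is an illustrative sketch only; the function name and the window/account thresholds are assumptions, not the researchers' published method.

```python
from collections import defaultdict
from datetime import timedelta

def coordinated_messages(tweets, window_minutes=5, min_accounts=3):
    """Find identical texts posted by several accounts close together.

    tweets: list of (account, text, datetime) tuples
    Returns the texts that at least `min_accounts` distinct accounts
    posted within a `window_minutes` span -- a rough copy-paste signal.
    Thresholds are hypothetical defaults for illustration.
    """
    by_text = defaultdict(list)
    for account, text, when in tweets:
        by_text[text].append((when, account))

    flagged = []
    for text, posts in by_text.items():
        posts.sort()  # order by timestamp
        accounts = {account for _, account in posts}
        span = posts[-1][0] - posts[0][0]
        if len(accounts) >= min_accounts and span <= timedelta(minutes=window_minutes):
            flagged.append(text)
    return flagged
```

A real pipeline would also normalize text (case, punctuation, near-duplicates) and weigh shared hashtags, but exact-match grouping already captures the back-to-back, copied-and-pasted pattern described above.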
“Increased polarization will have a variety of real-world consequences, and play out in things like voting behavior and hostility towards ethnic groups,” Carley said.
What you can do
While the researchers have not found any indication of who might be behind these bots, they say it’s important for all of us to be vigilant about what we read on social media, and especially about what we share with others.
We are the gatekeepers of our own social media bubble, and it pays to double-check everything against a reliable source. Even if someone appeals to your bias and says exactly what you want to hear, don’t just buy into it. This has never been more important.
In addition, researchers say we should be particularly careful with accounts belonging to people we don’t know personally. Most users have long surpassed the point where they are social media friends only with their real-life acquaintances, and instead follow a variety of accounts and pages. Many of these might be malevolent.
“Even if someone appears to be from your community, if you don’t know them personally, take a closer look, and always go to authoritative or trusted sources for information,” Carley said. “Just be very vigilant.”