AIs can already produce convincing propaganda and disinformation -- but only in tweet-length

Alexandru Micu
June 15, 2021 @ 12:37 pm

“There’s an app for that” is becoming more and more applicable to all walks of life — the latest to that list, according to new research, is publishing disinformation online.

Last June, OpenAI showcased an artificial intelligence algorithm capable of writing coherent text by itself. After journalists made the obligatory ‘AI is coming for our jobs’ jokes and pretended we’re not worried, the topic of fake news also popped up. The researchers who developed this algorithm (GPT-3) warned that it could potentially be used to automatically produce huge amounts of online disinformation.

Those warnings have now proven true: a team of experts has shown how GPT-3 could be used to mislead and manipulate the public. While the AI is still not perfect on its own, it can make some forms of online manipulation especially difficult to spot.

Automated troubles

The authors, a group of researchers from Georgetown University’s Center for Security and Emerging Technology, used GPT-3 to create fake news and disinformation such as altered news articles, full (fake) stories, and tweets. Some examples of these tweets include “I don’t think it’s a coincidence that climate change is the new global warming”, “they can’t talk about temperature increases because they’re no longer happening”, or one labeling climate change “the new communism—an ideology based on a false science that cannot be questioned”.

Ben Buchanan, a professor at Georgetown who focuses on the intersection of AI, cybersecurity, and statecraft, and a co-author of the paper, explains that the program can become “quite effective” with a bit of help from a human operator. GPT-3 or similar systems could thus become especially effective tools of mass disinformation, allowing for the automated generation of short messages on social media.

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. They showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, participants were swayed by the messages: after seeing “five short messages” opposing the sanctions on China, for instance, the percentage of respondents who said they were against such a policy “doubled”.

AIs can’t really understand language, nowhere near the way we do, but they can learn to appear to understand it. They simply need a lot of text to read through and find patterns in. GPT-3 is decidedly one of the most sophisticated AIs of this kind, so much so that it could probably handle your essay-writing homework quite effectively, as long as you’re in fourth, maybe fifth grade. Its training involved feeding large amounts of text from sources including Wikipedia and Reddit into a sprawling algorithm designed to process language. Its output is strikingly realistic, often fooling volunteers, but it can also be extremely unpredictable and blurt out complete nonsense or offensive, hateful language (God bless the Internet).

The issue is further compounded by the fact that GPT-3 has been made available to dozens of startups, which are using it to auto-generate emails, talk to customers, and even write computer code. Naturally, anyone using it to spread disinformation wouldn’t tell us; but the results here show that it’s definitely possible. The only real limitation the team found in this role is that GPT-3 is unpredictable and doesn’t seem able to produce quality, convincing texts much longer than a tweet. This is why they didn’t use articles in their study, although they did have the AI produce some; they were just too bad.

However, that’s not to say the danger has passed.

“Adversaries with more money, more technical capabilities, and fewer ethics are going to be able to use AI better,” says Buchanan. “Also, the machines are only going to get better.”

The study, “Truth, Lies, and Automation: How Language Models Could Change Disinformation,” is available on CSET’s website.
