You’re browsing through Facebook’s newsfeed when you come across a recording of Mark Zuckerberg, none other than the social network’s founder, giving an outrageous speech on national television. “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” Zuckerberg says in the video. “I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”
Except it wasn’t Zuckerberg who said any of that. It’s all a hoax, part of a recent genre of AI-driven forgeries called ‘deepfakes’. This particular video, which was uploaded to Instagram, was produced by a pair of artists in partnership with the advertising company Canny. While meant as a demonstration, similar deepfakes can be a lot more sinister and damaging.
Recently, a deepfake video of House Speaker Nancy Pelosi made the rounds on social media, causing an uproar. Instead of removing the fake video, Facebook chose to de-prioritize it, meaning it showed up less frequently in users’ feeds. It also attached third-party fact-checker information, just as it does with fake news stories shared on the platform.
But will the social network take a more radical stand now that its own brand can be directly affected by such impersonations? When the Pelosi fake surfaced on the platform, Neil Potts, Facebook’s director of public policy, said that this would make no difference. Now that the hypothetical has become reality, it remains to be seen how Facebook will react.
Pentus-Rosimannus: If instead of Pelosi, a doctored video of Mark Zuckerberg was slowed down to make him seem impaired, would you leave it up?
— Alex Boutilier (@alexboutilier) May 28, 2019
Deepfakes are as creepy as they are dangerous. The technique has been used to attribute fake statements to politicians like Barack Obama and Vladimir Putin, and to graft Nicolas Cage’s face onto characters from famous movies he never starred in. Similar tech was used to swap the faces of celebrities into porn.
It’s been getting ridiculously easy to create these, too. Researchers have made deepfakes starting from a single image or painting, bringing to life portraits of Einstein, Dali, and even the Mona Lisa. Elsewhere, a recent collaboration between Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research produced new software that uses machine learning to let users edit the text transcript of a video, adding, deleting, or changing the words coming right out of somebody’s mouth.
The Zuckerberg video was created like just about any other deepfake. It employed an algorithm developed at the University of Washington by the same people who made the fake Obama videos. Canny also sprinkled in some code inspired by Stanford’s Face2Face program, which enables real-time facial matching. The algorithm was trained on short scenes, no longer than 45 seconds, featuring the target face. A voice actor’s recording was then used to reconstruct frames in the fake video, showing Zuckerberg making statements he never actually voiced.
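To give a sense of the mechanics, many face-swap deepfakes (not necessarily the exact pipeline Canny used) rely on a shared encoder paired with one decoder per identity: the encoder compresses any face to a code capturing expression and pose, and decoding that code with the *other* person’s decoder produces their face wearing the original expression. Below is a toy sketch of that architecture, with random weights standing in for models that would normally be trained on thousands of frames:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random weights stand in for trained parameters in this sketch.
    return rng.standard_normal((n_in, n_out)) * 0.1

# A shared encoder maps any 64x64 face to a low-dimensional
# code capturing expression and pose.
W_enc = layer(64 * 64, 128)

# One decoder per identity reconstructs that person's face from the code.
W_dec_a = layer(128, 64 * 64)  # would be trained on person A's frames
W_dec_b = layer(128, 64 * 64)  # would be trained on person B's frames

def encode(face):
    return np.tanh(face.reshape(-1) @ W_enc)

def decode(code, W_dec):
    return (code @ W_dec).reshape(64, 64)

# The swap: encode a frame of person A, then decode with B's decoder,
# yielding B's face with A's expression and head pose.
frame_a = rng.standard_normal((64, 64))
fake_frame = decode(encode(frame_a), W_dec_b)
print(fake_frame.shape)  # a 64x64 swapped frame
```

Run frame by frame over a 45-second clip, this is the skeleton of a face-swap video; real systems add convolutional networks, alignment, and blending, but the encoder-plus-two-decoders trick is the core idea.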
If you pay close attention, it’s still relatively easy to spot these as fakes. The voices in particular sound unnatural and synthetic, but with some improvements they could become indistinguishable from those of real people. Just a few weeks ago, someone made an AI that sounds just like Joe Rogan. Seriously, listen to the production and be prepared for one heck of a trip.
In the years to come, deepfakes will only get better and easier to make. If you thought fake news was bad, wait until you see this ungodly technology released into the wild.