Imagine waking up one day to discover that everyone — including your family members, friends, neighbors, and even some strangers — is mocking you. The reason? The previous night, they all saw a video of you proclaiming yourself to be the reincarnation of Jesus.
You would never make such a video, right? Well, no one needs you to. There are people out there who can create such a video without ever filming you, using AI deepfake tools.
For now, you aren’t a likely target unless you happen to be a celebrity. For celebrities, though, it’s already happening. Recently, film star Tom Hanks posted on Instagram about a deepfake video of him promoting a dental plan.
“BEWARE!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” Tom Hanks said on Instagram.
This is far from the first time a celebrity deepfake has been used to scam people online.
Deepfake videos are all over the internet
A deepfake is synthetic media, often a video or audio clip, created using artificial intelligence (AI) techniques to make it appear as if someone is saying or doing something they never actually said or did. Deepfakes are generated by training machine learning algorithms on a large dataset of images or audio clips to learn the unique features and movements of a person’s face or voice. Once trained, these algorithms can generate highly realistic and convincing fake content.
People create such content either for fun, to rack up views, or to scam others.
In 2017, a Reddit user named “deepfakes” began posting pornographic content in which celebrities’ faces were superimposed onto the bodies of performers in explicit videos. These were among the earliest examples of deepfakes.
However, deepfakes started gaining popularity in 2018 when some social media users started posting videos with swapped faces of actors in different movie scenes. Another notable early example of a deepfake was a video in which former U.S. President Barack Obama’s face was digitally manipulated to say things that he never said.
Filmmaker Jordan Peele released this video in April 2018 to raise awareness about the potential misuse of deepfake technology. Since then, AI technology has become much better, and deepfakes have become more convincing and dangerous than ever.
For example, like Tom Hanks, CBS anchor Gayle King and YouTube star Mr. Beast also recently posted warnings on their social media channels about deepfake videos that used their voices and faces to sell phony products.
“Lots of people are getting this deepfake scam ad of me. Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem,” Mr. Beast said on Twitter.
How the heck are deepfake videos even made?
Creating a deepfake video involves a multi-step process driven by artificial intelligence. It begins with collecting a substantial dataset (collection of data points) of the target person’s face, along with a dataset of the person whose face is to be replaced.
This can be done using videos and images that are already available on a target person’s social media. The collected data is then processed using facial recognition algorithms to align facial features, such as eyes, nose, and mouth, to ensure they match as closely as possible between the source and target.
Next, deep learning-based Generative Adversarial Networks (GANs) are used to train two neural networks against each other: the first, called the generator, creates the deepfake, while the second, the discriminator, tries to spot any differences between the real and the generated version.
This process continues until the generated content closely resembles the target. After this, the deepfake is further refined using lighting and color-grading software. That completes the visuals; next comes the voice, for which audio synthesis software is used to match the lip movements with synthesized speech. Finally, the video is ready.
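The adversarial loop described above can be sketched in miniature. The toy below is a hypothetical stand-in for a real face-swapping GAN: instead of images, the “real data” is just numbers drawn from a target distribution, the generator is a simple linear function, and the discriminator is a logistic classifier. The names and learning rates are illustrative assumptions, not any production system, but the training dynamic — discriminator learns to tell real from fake, generator learns to fool it — is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real data": samples from the target distribution (stand-in for real face images)
target_mean, target_std = 4.0, 0.5

# Generator g(z) = a*z + b maps random noise z to a fake sample (toy stand-in)
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks
w, c = 0.1, 0.0

lr_d, lr_g, batch = 0.05, 0.05, 64

for step in range(3000):
    real = rng.normal(target_mean, target_std, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss)
    d_fake = sigmoid(w * fake + c)
    common = (1 - d_fake) * w          # gradient of log D(fake) w.r.t. fake
    a += lr_g * np.mean(common * z)
    b += lr_g * np.mean(common)

# After training, generated samples should cluster near the real distribution
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean: {np.mean(samples):.2f} (target {target_mean})")
```

In a real deepfake pipeline, the generator and discriminator are deep convolutional networks operating on face images rather than scalars, but the tug-of-war between the two models is exactly this loop scaled up.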
Deepfakes are possibly the most dangerous AI technology
Unfortunately, deepfake scams are not limited to just selling cheap products. In March 2019, an executive of a U.K.-based energy firm received a deepfake voice call that he believed was from his boss, the CEO of their German parent company, and authorized a $243,000 transfer to a supplier in Hungary.
The money never reached Hungary; instead, it was funneled into multiple bank accounts in Mexico. And this is just one of the many horrible things that scammers can do with this technology.
A 2020 study suggests that, among all AI-based technologies, deepfakes may emerge as the biggest security challenge of the next 15 years.
“Humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence, despite the long history of photographic trickery. But recent developments in deep learning have significantly increased the scope for the generation of fake content,” the study authors note.
They believe deepfake technology could be used for various criminal purposes. For example, scammers can pretend to be children when talking to elderly persons over video calls to trick them into sending money. They can make fake phone calls to gain access to secure systems or create fabricated videos of politicians to manipulate public opinion.
However, not everything about deepfake technology is bad and evil. Like any other AI application, it has its upsides, and there are many ways in which it can benefit people.
For instance, some health experts are using deepfake technology to treat patients dealing with visualization disorders such as aphantasia. There are also educators who use deepfakes to improve the quality of online classes.
Beyond this, deepfakes also show promise in mental health therapy and in enriching the experience of visitors at museums and historical sites.
However, in the wrong hands, this technology does pose a serious threat. Therefore, the time has come for social media companies, policymakers, and AI experts to work together and devise ways to prevent the misuse of deepfake technology.