

New AI program creates realistic 'talking heads' from only an image and an audio

Anyone can now speak like Obama -- digitally.

Mihai Andrei
November 24, 2023 @ 3:29 pm


Image generated by AI (not in the study).

The landscape of generative AI is ever-evolving — and in the past year, it’s really taken off. Seemingly overnight, we have AIs that can generate images or text with stunning ease. This new achievement ties right into that, taking it one step further. A team of researchers led by Associate Professor Lu Shijian from the Nanyang Technological University (NTU) in Singapore has developed a computer program that creates realistic videos, reflecting the facial expressions and head movements of the person speaking.

This concept, known as audio-driven talking face generation, has gained significant traction in both academic and industrial realms due to its vast potential applications in digital human visual dubbing, virtual reality, and beyond. The core challenge lies in creating facial animations that are not just technically accurate but also convey the subtle nuances of human expressions and head movements in sync with the spoken audio.

The problem is that humans have a lot of different facial movements and emotions, and capturing the entire spectrum is extremely difficult. But the new method seems to capture everything, including accurate lip movements, vivid facial expressions, and natural head poses – all derived from the same audio input.

Diverse yet realistic facial animations

A DIRFA-generated ‘talking head’ created from just an audio clip of former US president Barack Obama speaking and a photo of Associate Professor Lu Shijian. Credit: Nanyang Technological University

The research paper in focus introduces DIRFA (Diverse yet Realistic Facial Animations). The team trained DIRFA on more than 1 million clips from 6,000 people, drawn from an open-source database. The engine doesn’t only focus on lip-syncing — it attempts to derive the entire range of facial movements and reactions.

First author Dr. Wu Rongliang, a Ph.D. graduate from NTU’s SCSE, said:

“Speech exhibits a multitude of variations. Individuals pronounce the same words differently in diverse contexts, encompassing variations in duration, amplitude, tone, and more. Furthermore, beyond its linguistic content, speech conveys rich information about the speaker’s emotional state and identity factors such as gender, age, ethnicity, and even personality traits.”

Once trained, DIRFA takes in a static portrait of a person and an audio clip and produces a 3D video showing the person speaking. The animation isn’t perfectly smooth, but it is consistent in the facial movements.
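To make the idea concrete, here is a minimal, purely illustrative sketch of what an audio-driven talking-face pipeline does at inference time: map the audio signal to per-frame animation parameters (lip movement, head pose, expression), then use those parameters to animate the static portrait. All names, parameters, and the loudness-based heuristic below are hypothetical — a real system like DIRFA predicts these parameters with a learned model, not simple rules.

```python
# Hypothetical sketch of an audio-driven talking-face pipeline.
# Illustrative only; this is NOT the authors' actual API or method.

from dataclasses import dataclass

@dataclass
class FacePose:
    """One frame's worth of animation parameters (all hypothetical)."""
    mouth_open: float   # 0.0 (closed) to 1.0 (fully open)
    head_yaw: float     # degrees of left/right head turn
    brow_raise: float   # 0.0 to 1.0

def audio_to_poses(audio_amplitudes):
    """Map per-frame audio loudness to plausible facial poses.
    A trained model would predict these from learned speech features
    rather than raw loudness."""
    poses = []
    for i, amp in enumerate(audio_amplitudes):
        poses.append(FacePose(
            mouth_open=min(1.0, amp),            # louder speech, wider mouth
            head_yaw=2.0 * ((i % 10) - 5) / 5,   # gentle periodic head motion
            brow_raise=0.3 * amp,                # expressiveness tracks energy
        ))
    return poses

def animate(portrait, poses):
    """Warp a static portrait into one frame per pose (stubbed here)."""
    return [f"frame({portrait}, mouth={p.mouth_open:.2f})" for p in poses]

# One frame of output per audio frame of input.
frames = animate("portrait.jpg", audio_to_poses([0.2, 0.9, 0.5]))
```

The key structural point the sketch captures is that a single audio stream drives every animated attribute at once — lips, head pose, and expression — which is exactly the part the NTU team highlights as difficult to get right.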

“Our program also builds on previous studies and represents an advancement in the technology, as videos created with our program are complete with accurate lip movements, vivid facial expressions and natural head poses, using only their audio recordings and static images,” says corresponding author Associate Professor Lu Shijian.

Why this matters

Far from being only a cool party trick (and potentially being used for disinformation by malicious actors), this technology has important and positive applications.

In healthcare, it promises to enhance the capabilities of virtual assistants and chatbots, making digital interactions more engaging and empathetic. More profoundly, it could serve as a transformative tool for individuals with speech or facial disabilities, offering them a new avenue to communicate their thoughts and emotions through expressive digital avatars.

While DIRFA opens up exciting possibilities, it also raises important ethical questions, particularly in the context of misinformation and digital authenticity. Addressing these concerns, the NTU team suggests incorporating safeguards like watermarks to indicate the synthetic nature of the videos — but if there’s anything the internet has taught us, it’s that there are ways around such safeguards.

It’s still early days for all AI technology. The potential for important societal impact is there, but so is the risk of misuse. As always, we should ensure that the digital world we are creating is safe, authentic, and beneficial for all.

The study was published in the journal Pattern Recognition.
