Casey Harrell spent years as a climate activist, speaking out at public events and policy meetings. Then he lost his voice to ALS (amyotrophic lateral sclerosis). The disease weakened the muscles in his mouth and throat until he could no longer form clear words. Now, a brain implant is helping him speak again—in his own voice, with real-time emotion and tone.
The device, tested by researchers at the University of California, Davis, can translate brain signals into speech almost instantly. It’s part of a growing field known as brain–computer interfaces, or BCIs. But what sets this one apart is its ability to capture not just the words but the natural rhythm of speech: the pitch, stress, and pauses that make language feel human.
The system, described this week in Nature, uses artificial intelligence to recreate speech directly from the brain’s motor signals. It works fast enough for back-and-forth conversation and even allows users to sing short melodies. For people who have lost their ability to speak clearly, it marks a major step forward.

Real-Time, Real Emotion
Harrell, now 47, was diagnosed with ALS five years ago. The neurodegenerative disease gradually eroded the neural connections that allowed him to control the muscles in his lips, tongue, and throat. Though he could still vocalize, his speech had become unintelligible.
In an earlier trial, Harrell had an array of 256 electrodes implanted into the part of his brain responsible for movement. These tiny silicon sensors, each just 1.5 millimeters long, picked up the electrical activity from thousands of neurons. In the new study, researchers paired that data stream with a deep-learning model that turned his brain signals into audible words at a speed nearly indistinguishable from natural speech.
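The paper’s architecture isn’t detailed here, but the general shape of such a pipeline, in which short time bins of multi-channel neural activity flow through a causal network that emits acoustic frames for a vocoder, can be sketched in a few lines. The model below is a hypothetical illustration in PyTorch; the class name, layer sizes, and 10-millisecond bin width are assumptions, not the UC Davis design.

```python
# Hypothetical sketch of a streaming brain-to-speech decoder. Everything
# here (names, layer sizes, the 10 ms bin width) is an illustrative
# assumption, not the published UC Davis model.
import torch
import torch.nn as nn

class StreamingSpeechDecoder(nn.Module):
    """Maps binned activity from a 256-electrode array to acoustic frames."""

    def __init__(self, n_channels=256, hidden=512, n_mels=80):
        super().__init__()
        # A unidirectional GRU keeps the decoder causal: each output frame
        # depends only on past neural activity, which is what lets speech
        # be synthesized while a sentence is still being attempted.
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.to_acoustic = nn.Linear(hidden, n_mels)

    def forward(self, spikes, state=None):
        # spikes: (batch, time, channels) spike counts in short time bins
        out, state = self.rnn(spikes, state)
        return self.to_acoustic(out), state  # acoustic frames + RNN state

decoder = StreamingSpeechDecoder()
state = None
for _ in range(5):  # five consecutive 10 ms bins of stand-in data
    neural_bin = torch.randn(1, 1, 256)
    frame, state = decoder(neural_bin, state)
    # A vocoder would render each predicted frame as audible sound here.
```

Carrying the recurrent state from one bin to the next is what lets a design like this respond within tens of milliseconds rather than waiting for a finished sentence.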
The synthetic voice doesn’t rely on preprogrammed words or phrases. It decodes sounds directly, including interjections like “hmm” and made-up words the algorithm had never seen. And it can change intonation mid-sentence.
“We are bringing in all these different elements of human speech which are really important,” said lead researcher Maitreyee Wairagkar. “We don’t always use words to communicate what we want.”
That nuance is key. In one test, Harrell used the system to speak a sentence as both a statement and a question, and the software adjusted the pitch automatically. In another, he sang a short string of musical notes at three different pitches. What emerged from the speaker was unmistakably his voice, reconstructed from recordings he made before ALS stole it.
A Game Changer
Most earlier speech BCIs worked in fits and starts. They could translate a sentence only after the user had finished miming the entire phrase, and some took as long as three seconds to respond, far too slow for conversation.
“You can’t interrupt people, you can’t make objections, you can’t sing,” Sergey Stavisky, a UC Davis neuroscientist and co-author of the study, told Science, describing previous devices. “This changes that.” The new system speaks back within 25 milliseconds—about the time it takes for a person’s voice to reach their own ears.
Volunteers listening to Harrell’s synthetic voice understood 60% of what he said, compared with just 4% when he tried speaking unaided. It’s not yet perfect. His text-based BCI, which uses large language models to interpret each word after he attempts it, is still more accurate, at around 98%. But it’s also slower and more rigid.
“This is the holy grail in speech BCIs,” Christian Herff, a computational neuroscientist at Maastricht University, told Scientific American. “This is now real, spontaneous, continuous speech.”
Perhaps most intriguingly, the system decodes brain activity directly into sound rather than into phonemes or dictionary entries. That opens the door to multilingual support, potentially even tonal languages like Mandarin, and to preserving a speaker’s unique accent.
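To see why, compare the two output layers a decoder could end with. A phoneme classifier is locked to one language’s sound inventory, while a regression onto continuous acoustic features treats pitch as just another output dimension, and pitch is what carries lexical tone in Mandarin and the rising contour of a question. The snippet below sketches that contrast; the 39-phoneme inventory and 80 mel bands are illustrative assumptions, not the study’s code.

```python
# Hypothetical contrast between two decoding targets. The dimensions
# (39 English phonemes, 80 mel bands) are illustrative assumptions.
import torch
import torch.nn as nn

features = torch.randn(1, 100, 512)  # stand-in for decoded neural features

# Phoneme route: classify into a fixed, language-specific inventory.
# Tones, hums, and out-of-vocabulary sounds have no class to land in.
phoneme_head = nn.Linear(512, 39)
phoneme_logits = phoneme_head(features)

# Acoustic route: regress continuous sound features. Pitch (F0) is one
# more output dimension, so intonation, lexical tone, and accent survive;
# a vocoder then renders the frames as a waveform.
acoustic_head = nn.Linear(512, 80 + 1)  # 80 mel bands + 1 pitch track
acoustic_frames = acoustic_head(features)
```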
What’s Next?
The study’s success comes with caveats. Harrell’s ALS has not yet degraded his motor cortex to the point where it stops producing usable speech signals. It remains to be seen whether the same system would work in patients with other kinds of neurological damage, such as stroke.
That’s what researchers plan to find out next. A new clinical trial led by UC Davis’s David Brandman will test implants with even more electrodes—up to 1,600—in people with a range of speech impairments.
The goal, Wairagkar says, is not just to restore a voice. It’s to restore the full human experience of conversation: spontaneity, emotion, identity.
“This is a bit of a paradigm shift,” said Silvia Marchesotti, a neuroengineer at the University of Geneva. “It can really lead to a real-life tool.”
For Harrell, it already is.