

Meta's new AI can read your mind and type your thoughts with startling accuracy

Looks like Mr. Zuckerberg is secretly working on something that could serve as an alternative to invasive brain chips.

Rupendra Brahambhatt
March 25, 2025 @ 12:20 pm


Credit: Grok AI-generated image.

Invasive brain chips aren’t the only way to help patients with brain damage regain their ability to speak and communicate. A team of scientists at Meta has created an AI model that can understand what a person is thinking and convert those thoughts into typed sentences.

The AI also sheds light on how the human brain conveys thoughts in the form of language. The researchers suggest their model represents a first and crucial step toward developing noninvasive brain-computer interfaces (BCIs).

“Modern neuroprostheses can now restore communication in patients who have lost the ability to speak or move. However, these invasive devices entail risks inherent to neurosurgery. Here, we introduce a non-invasive method to decode the production of sentences from brain activity,” the researchers note.

To demonstrate the capabilities of their AI system, the Meta team conducted two separate studies. Here’s how their system performed.

Turning brain signals into words

Image credits: Ron Lach/Pexels

The first study involved 35 participants, who watched letters appear on a screen and then, after a cue, typed from memory the sentence those letters formed. The researchers used magnetoencephalography (MEG) to record the magnetic signals generated by the participants’ brains as they focused on turning their thoughts into typed sentences.

Next, they trained an AI model on the MEG data. Then came another test: this time, the AI model (called Brain2Qwerty) had to predict and type the sentences forming in participants’ minds as they read letters on a screen. Finally, the researchers compared the model’s output to the sentences the participants actually typed.
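To give a rough sense of what training such a decoder involves, here is a minimal sketch in Python (PyTorch): a small convolutional network reads a window of multi-channel MEG signal and predicts which character the participant was typing at that moment. The sensor count, window length, alphabet, and architecture are illustrative assumptions, not Meta’s actual Brain2Qwerty model.

```python
# Hypothetical sketch of a MEG-to-character decoder.
# All sizes below are assumptions for illustration only.
import torch
import torch.nn as nn

N_SENSORS = 208   # assumed number of MEG channels
WINDOW = 500      # assumed samples per typed character (0.5 s at 1,000 Hz)
N_CHARS = 29      # assumed alphabet: a-z, space, apostrophe, hyphen

class MEGCharDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # 1-D convolutions over time, mixing information across sensors
        self.encoder = nn.Sequential(
            nn.Conv1d(N_SENSORS, 128, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(128, N_CHARS)   # one logit per character

    def forward(self, meg):                   # meg: (batch, sensors, time)
        features = self.encoder(meg).squeeze(-1)
        return self.head(features)            # (batch, N_CHARS)

model = MEGCharDecoder()
meg_windows = torch.randn(32, N_SENSORS, WINDOW)   # stand-in for real data
true_chars = torch.randint(0, N_CHARS, (32,))      # stand-in labels
loss = nn.functional.cross_entropy(model(meg_windows), true_chars)
loss.backward()   # one step of ordinary supervised training
```

The real system is presumably far larger and trained on hours of recordings per participant, but the supervised recipe, brain signals in and characters out, is the same basic idea.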

Brain2Qwerty was 68% accurate at predicting the letters participants typed. It struggled most with sentences involving letters such as K and Z. When it did err, though, it tended to guess letters sitting near the correct one on a QWERTY keyboard. This suggests the model could also detect motor signals in the brain and use them to predict what a participant typed.
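That QWERTY-adjacency pattern is straightforward to quantify. Below is a toy sketch, with made-up predictions and a simplified keyboard grid, of how one might measure how far each wrong guess lands from the true key; errors driven by motor signals should cluster at a distance of about one key.

```python
# Toy sketch: do prediction errors land near the true key on a QWERTY grid?
# The keyboard layout is simplified and the example data is invented.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {ch: (r, c) for r, row in enumerate(ROWS)
           for c, ch in enumerate(row)}

def key_distance(a: str, b: str) -> float:
    """Euclidean distance between two letters on the simplified grid."""
    (r1, c1), (r2, c2) = KEY_POS[a], KEY_POS[b]
    return ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5

truth, guesses = "hello", "jwllp"   # invented example predictions
errors = [(t, g) for t, g in zip(truth, guesses) if t != g]
mean_dist = sum(key_distance(t, g) for t, g in errors) / len(errors)
print(f"mean keyboard distance of errors: {mean_dist:.2f}")   # 1.00
# 'h'->'j', 'e'->'w', and 'o'->'p' are all neighboring keys: the
# signature of motor (typing) information leaking into the predictions.
```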

In the second study, researchers examined how the brain forms language while typing. They collected 1,000 brain activity snapshots per second. Next, they used these snapshots to map how the brain built a sentence. They found that the brain keeps words and letters separate using a dynamic neural code that shifts how and where information is stored. 

This code prevents overlap and helps maintain sentence structure while linking letters, syllables, and words smoothly. Think of it like moving information around in the brain so that each letter or word has its own space, even if they are processed at the same time. 

“This approach confirms the hierarchical predictions of linguistic theories: the neural activity preceding the production of each word is marked by the sequential rise and fall of context-, word-, syllable-, and letter-level representations,” the study authors note.

This way, the brain can keep track of each letter without mixing them up, ensuring smooth and accurate typing or speech. The researchers compare this to a technique in artificial intelligence called positional embedding, which helps AI models understand the order of words.
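For the curious, here is what positional embedding looks like in its classic sinusoidal form (Vaswani et al., 2017), as a minimal NumPy sketch. It illustrates the AI side of the researchers’ analogy, not the brain’s actual code: each position in a sequence gets its own vector, so repeated letters stay distinguishable.

```python
# Minimal sketch of sinusoidal positional embeddings, the AI technique
# the researchers compare the brain's dynamic neural code to.
import numpy as np

def positional_embedding(seq_len: int, dim: int) -> np.ndarray:
    """Return one unique dim-sized vector per sequence position."""
    positions = np.arange(seq_len)[:, None]              # (seq_len, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    angles = positions * freqs                           # (seq_len, dim/2)
    emb = np.zeros((seq_len, dim))
    emb[:, 0::2] = np.sin(angles)   # even dimensions: sine
    emb[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return emb

emb = positional_embedding(seq_len=5, dim=16)   # e.g. the word "hello"
# The two l's in "hello" share a letter identity, but their position
# vectors differ, so downstream layers can keep them apart:
print(np.allclose(emb[2], emb[3]))   # False
```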

“Overall, these findings provide a precise computational breakdown of the neural dynamics that coordinate the production of language in the human brain,” they added.

Brain2Qwerty has some limitations 

While Meta’s AI model can decode human thoughts with impressive accuracy for a noninvasive method, there’s still a lot of work to be done before it becomes practical. For now, the model works only in a controlled lab environment and requires a cumbersome MEG setup.

Turning it into a practical noninvasive BCI that could be used for healthcare and other purposes seems quite challenging at this stage. Moreover, the current studies involved only 35 subjects.

It would be interesting to see if the Meta team could overcome these challenges before its rivals come up with a better thought-to-text AI system. 

Note: Both studies are yet to be peer-reviewed. You can read them here and here.

