

Meta's new AI can read your mind and type your thoughts with startling accuracy

Looks like Mr. Zuckerberg's team is working on something that could serve as an alternative to invasive brain chips.

Rupendra Brahambhatt
March 25, 2025 @ 12:20 pm


Credit: Grok AI-generated image.

Invasive brain chips aren’t the only way to help patients with brain damage regain their ability to speak and communicate. A team of scientists at Meta has created an AI model that can understand what a person is thinking and convert their thoughts into typed sentences.

The AI also sheds light on how the human brain conveys thoughts in the form of language. The researchers suggest their model represents a first, crucial step toward developing noninvasive brain-computer interfaces (BCIs).

“Modern neuroprostheses can now restore communication in patients who have lost the ability to speak or move. However, these invasive devices entail risks inherent to neurosurgery. Here, we introduce a non-invasive method to decode the production of sentences from brain activity,” the researchers note.

To demonstrate the capabilities of their AI system, the Meta team conducted two separate studies. Here’s how their system performed.

Turning brain signals into words

Image credits: Ron Lach/Pexels

The first study involved 35 participants who watched letters appear on a screen and then, after a cue, typed from memory the sentence those letters formed. The researchers used magnetoencephalography (MEG) to map the magnetic signals generated by the participants’ brains while they focused on turning their thoughts into typed sentences.

Next, they trained an AI model, called Brain2Qwerty, on the MEG data. In a second round of testing, the model had to predict and type the sentences forming in participants’ minds as they read letters on a screen. Finally, the researchers compared the model’s output to the sentences the participants actually typed.
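Comparing a decoder's output to what a participant actually typed is usually done with a character error rate (CER): the number of character-level edits needed to turn the prediction into the reference, divided by the reference length. This is a minimal sketch of that metric, not Meta's code; the function names are illustrative.

```python
# Minimal character-error-rate sketch (not from the Meta papers).
# CER = Levenshtein edit distance / length of the reference sentence.

def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance: minimum insertions, deletions, substitutions."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, hypothesis: str) -> float:
    """Fraction of the reference that the prediction got wrong."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

# One substituted letter in an 11-character sentence gives a small CER:
cer = character_error_rate("hello world", "hellp world")
```

A CER of about 0.32 would correspond to the roughly 68% letter accuracy reported for Brain2Qwerty.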

Brain2Qwerty was 68% accurate in predicting the letters the participants typed. It mostly struggled with sentences involving rarer letters such as K and Z. When it did err, though, it often guessed a letter adjacent to the correct one on a QWERTY keyboard. This suggests the model was also picking up motor signals in the brain, effectively predicting the hand movements behind what a participant typed.
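The QWERTY-neighbor pattern is something you can check directly: for each mismatched letter, ask whether the predicted key sits next to the correct one on the keyboard. Here is a hedged illustration of that kind of analysis (an assumption of how such a check could be done, not code from the studies):

```python
# Illustrative check (not from the papers): what fraction of decoding
# errors land on a key physically adjacent to the correct one?

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

# Approximate (row, column) coordinates; lower rows are offset
# about half a key to the right, as on a physical keyboard.
KEY_POS = {
    ch: (row, col + 0.5 * row)
    for row, keys in enumerate(QWERTY_ROWS)
    for col, ch in enumerate(keys)
}

def keys_adjacent(a: str, b: str) -> bool:
    """True if two distinct keys are neighbors on the QWERTY layout."""
    (r1, c1), (r2, c2) = KEY_POS[a], KEY_POS[b]
    return a != b and abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1

def adjacent_error_fraction(reference: str, prediction: str) -> float:
    """Among mismatched letters, the share that are QWERTY neighbors."""
    errors = [(r, p) for r, p in zip(reference, prediction)
              if r != p and r in KEY_POS and p in KEY_POS]
    if not errors:
        return 0.0
    return sum(keys_adjacent(r, p) for r, p in errors) / len(errors)
```

If that fraction is much higher than chance, the decoder is likely reading out hand-movement signals rather than abstract letter identities.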

In the second study, researchers examined how the brain forms language while typing. They collected 1,000 brain activity snapshots per second. Next, they used these snapshots to map how the brain built a sentence. They found that the brain keeps words and letters separate using a dynamic neural code that shifts how and where information is stored. 

This code prevents overlap and helps maintain sentence structure while linking letters, syllables, and words smoothly. Think of it like moving information around in the brain so that each letter or word has its own space, even if they are processed at the same time. 

“This approach confirms the hierarchical predictions of linguistic theories: the neural activity preceding the production of each word is marked by the sequential rise and fall of context-, word-, syllable-, and letter-level representations,” the study authors note.

This way, the brain can keep track of each letter without mixing them up, ensuring smooth and accurate typing or speech. The researchers compare this to a technique in artificial intelligence called positional embedding, which helps AI models understand the order of words.
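The positional embedding idea the researchers draw on can be sketched in a few lines. This is the standard sinusoidal scheme from Transformer models (an assumption for illustration; nothing specific to Meta's decoder): every position in a sequence gets its own distinct vector, so the same letter appearing twice stays distinguishable by where it sits.

```python
# Sketch of sinusoidal positional embeddings, the AI technique the
# researchers compare the brain's dynamic neural code to.
import math

def positional_embedding(position: int, dim: int) -> list[float]:
    """Alternating sin/cos at geometrically spaced frequencies."""
    vec = []
    for i in range(dim):
        freq = 1.0 / (10000 ** (2 * (i // 2) / dim))
        angle = position * freq
        vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return vec

# The same letter at positions 0 and 5 gets different "address" vectors,
# so a model (or a brain) can keep repeated letters from colliding.
addr_0 = positional_embedding(0, 8)
addr_5 = positional_embedding(5, 8)
```

The analogy: just as these vectors give each word a unique slot in an AI model, the brain's shifting neural code appears to give each letter and word its own "space" even while they are processed simultaneously.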

“Overall, these findings provide a precise computational breakdown of the neural dynamics that coordinate the production of language in the human brain,” they added.

Brain2Qwerty has some limitations 

While Meta’s AI model decodes typed sentences with promising accuracy, there’s still a lot of work to be done before it becomes practical. For instance, the model currently works only in a controlled lab environment and requires a cumbersome MEG setup.

Turning it into a practical noninvasive BCI that could be used for healthcare and other purposes seems quite challenging at this stage. Moreover, the current studies involved only 35 subjects.

It would be interesting to see if the Meta team could overcome these challenges before its rivals come up with a better thought-to-text AI system. 

Note: Both studies are yet to be peer-reviewed. You can read them here and here.
