Meta's new AI can read your mind and type your thoughts with startling accuracy

Looks like Mr. Zuckerberg is quietly working on something that could serve as an alternative to invasive brain chips.

Rupendra Brahambhatt
March 25, 2025 @ 12:20 pm


Credit: Grok AI-generated image.

Invasive brain chips aren’t the only way to help patients with brain damage regain their ability to speak and communicate. A team of scientists at Meta has created an AI model that can understand what a person is thinking, and convert their thoughts into typed sentences. 

The AI also sheds light on how the human brain conveys thoughts in the form of language. The researchers suggest their model represents a first and crucial step toward developing noninvasive brain-computer interfaces (BCIs).

“Modern neuroprostheses can now restore communication in patients who have lost the ability to speak or move. However, these invasive devices entail risks inherent to neurosurgery. Here, we introduce a non-invasive method to decode the production of sentences from brain activity,” the researchers note.

To demonstrate the capabilities of their AI system, the Meta team conducted two separate studies. Here’s how their system performed.

Turning brain signals into words

Image credits: Ron Lach/Pexels

The first study involved 35 participants who read sentences that appeared on a screen, letter by letter, and then, after a cue, typed those sentences from memory. The researchers used magnetoencephalography (MEG) to record the magnetic signals generated by the participants' brains while they turned the remembered sentences into typed text.

Next, they trained an AI model, called Brain2Qwerty, on the MEG data. In a second round of testing, the model had to predict and type the sentences forming in participants' minds as they read letters on a screen. Finally, the researchers compared the model's output to the sentences the participants actually typed.
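To get a feel for the setup, here is a toy sketch in Python of the kind of problem the model solves: guessing which letter was typed from a window of brain activity. This is not Meta's actual architecture (Brain2Qwerty is a deep neural network), and everything below, from the sensor count to the data itself, is made up for illustration.

```python
# Toy sketch only: a per-keystroke letter classifier. Brain2Qwerty is a
# deep network trained on real MEG recordings; here the "brain data" is
# random noise and the dimensions are shrunk to keep the example fast.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_keystrokes = 500  # one brain-activity window per typed character
n_sensors = 20      # real MEG systems have a few hundred sensors
n_samples = 50      # time points per window

# X: flattened brain activity around each keystroke; y: the letter typed
X = rng.normal(size=(n_keystrokes, n_sensors * n_samples))
y = rng.integers(0, 26, size=n_keystrokes)  # 26 letter classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=500).fit(X_train, y_train)

# On random noise, accuracy sits near chance (1 in 26); the real model
# learns genuine structure from genuine recordings.
print("character accuracy:", clf.score(X_test, y_test))
```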

Brain2Qwerty predicted the letters participants typed with 68% accuracy. It struggled most with sentences involving rarer letters such as K and Z. When it did make errors, however, it tended to guess letters near the correct one on a QWERTY keyboard, which indicates the model was also detecting the motor signals the brain produces while typing.
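That keyboard-adjacency pattern is simple to check in code. The sketch below uses a made-up list of (true letter, predicted letter) error pairs and counts how many predictions landed on a key bordering the correct one:

```python
# Sketch: do decoding errors cluster around the correct key? The layout is
# a simplified QWERTY grid (ignoring the physical stagger between rows),
# and the error pairs are invented for illustration.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_pos(ch):
    """Return the (row, column) of a letter on the simplified grid."""
    for row, keys in enumerate(QWERTY_ROWS):
        if ch in keys:
            return row, keys.index(ch)
    raise ValueError(f"not a letter key: {ch!r}")

def is_neighbor(a, b):
    """True if the two keys sit within one row and one column of each other."""
    (r1, c1), (r2, c2) = key_pos(a), key_pos(b)
    return abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1

errors = [("s", "a"), ("k", "l"), ("z", "x"), ("e", "p")]  # (true, predicted)
near = sum(is_neighbor(true, pred) for true, pred in errors)
print(f"{near} of {len(errors)} errors fell on a neighboring key")
```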

In the second study, researchers examined how the brain forms language while typing. They collected 1,000 brain activity snapshots per second. Next, they used these snapshots to map how the brain built a sentence. They found that the brain keeps words and letters separate using a dynamic neural code that shifts how and where information is stored. 
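As a side note on data handling: at that sampling rate, the continuous recordings first have to be sliced into short windows around each keystroke before any mapping can happen. Here is a minimal sketch of that step using the open-source MNE-Python library, with random noise standing in for real brain signals and made-up keystroke timings:

```python
# Sketch: cutting a continuous 1,000-samples-per-second recording into
# per-keystroke windows with MNE-Python. Channel count, duration, and
# event timings are all assumptions for illustration.
import numpy as np
import mne

sfreq = 1000.0  # 1,000 snapshots per second, as in the study
n_channels, n_seconds = 32, 10
info = mne.create_info(
    [f"MEG{i:03d}" for i in range(n_channels)], sfreq, ch_types="mag"
)
raw = mne.io.RawArray(np.random.randn(n_channels, int(sfreq * n_seconds)), info)

# One event per keystroke: (sample index, 0, event code)
events = np.array([[1000, 0, 1], [2500, 0, 1], [4200, 0, 1]])

# Keep a window from 0.2 s before to 0.5 s after each keystroke
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, baseline=None,
                    preload=True)
print(epochs.get_data().shape)  # (3 keystrokes, 32 channels, 701 time points)
```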

This dynamic neural code prevents overlap and helps maintain sentence structure while linking letters, syllables, and words smoothly. Think of it like the brain moving information around so that each letter or word has its own space, even if they are processed at the same time.

“This approach confirms the hierarchical predictions of linguistic theories: the neural activity preceding the production of each word is marked by the sequential rise and fall of context-, word-, syllable-, and letter-level representations,” the study authors note.

This way, the brain can keep track of each letter without mixing them up, ensuring smooth and accurate typing or speech. The researchers compare this to a technique in artificial intelligence called positional embedding, which helps AI models understand the order of words.
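Positional embedding is easy to demonstrate. In the sketch below (a standard sinusoidal embedding, not code from the studies), every position in a sequence gets its own vector built from sine and cosine waves, so the same letter appearing at two different positions remains distinguishable, loosely analogous to how the brain appears to give each letter its own slot:

```python
# Minimal sketch of sinusoidal positional embedding, the AI technique the
# researchers compare the brain's dynamic neural code to. Each sequence
# position gets a distinct vector, so identical tokens at different
# positions can still be told apart.
import numpy as np

def positional_embedding(seq_len, dim):
    pos = np.arange(seq_len)[:, None]        # position index, column vector
    i = np.arange(dim // 2)[None, :]         # index of each sin/cos pair
    angles = pos / (10000 ** (2 * i / dim))  # one frequency per pair
    emb = np.zeros((seq_len, dim))
    emb[:, 0::2] = np.sin(angles)            # even dimensions: sine
    emb[:, 1::2] = np.cos(angles)            # odd dimensions: cosine
    return emb

emb = positional_embedding(seq_len=10, dim=16)
# The same letter at positions 2 and 7 gets two different vectors:
print(np.allclose(emb[2], emb[7]))  # False
```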

“Overall, these findings provide a precise computational breakdown of the neural dynamics that coordinate the production of language in the human brain,” they added.

Brain2Qwerty has some limitations 

While Meta’s AI model can decode human thoughts with impressive accuracy, there is still a lot of work to be done before it becomes practical. For now, the model only works in a controlled lab environment and requires a cumbersome setup: MEG scanners are bulky machines that have to be operated inside magnetically shielded rooms.

Turning it into a practical noninvasive BCI that could be used for healthcare and other purposes seems quite challenging at this stage. Moreover, the current studies involved only 35 subjects.

It would be interesting to see if the Meta team could overcome these challenges before its rivals come up with a better thought-to-text AI system. 

Note: Both studies are yet to be peer-reviewed. You can read them here and here.
