
For many people with severe brain injuries, waking up in a hospital room feels like waking up in a nightmare. You’re stuck in your body, aware of everything around you, but unable to move or speak. This is the experience of “covert consciousness,” where a person remains inwardly aware but outwardly unresponsive.
Until recently, doctors relied on crude visual tests to assess whether patients in comas or vegetative states could truly perceive the world around them.
Now, an innovative artificial intelligence (AI) tool is offering a new way to detect consciousness, revealing that some patients may be aware of their surroundings days before traditional methods would ever pick it up.
AI Detects Covert Consciousness
People who seem to be in deep comas may still have some awareness. This phenomenon, known as covert consciousness, was first documented in 2006, when researchers asked a woman in a vegetative state to imagine performing physical tasks and found that her brain activity on the scans matched that of healthy volunteers given the same instructions. More recent studies have investigated the phenomenon in greater detail, showing that up to a quarter of people who seem unresponsive still produce task-related brain activity when given simple spoken commands.
But detecting these subtle signs has always been a challenge. The current gold standard, brain imaging, is expensive, complex, and not available in most hospitals. So researchers at Stony Brook University, led by computational neuroscientist Sima Mofakham, turned to a more accessible solution: AI.
Their new tool, dubbed SeeMe, uses computer vision to track the tiniest facial movements in patients who are believed to be unconscious. This technology can pick up on small movements that are often too subtle for human eyes to detect, such as the twitch of a muscle or a barely perceptible shift in the skin. In their recent study, published in Communications Medicine, Mofakham and her team found that SeeMe was able to detect signs of consciousness up to eight days earlier than clinicians could.
This groundbreaking technology could change how doctors and families approach care for those with severe brain injuries. Rather than relying on subjective visual assessments, SeeMe provides objective, real-time data on facial movements that can offer early clues about a patient’s recovery potential. And it might even open the door to communication with patients who were previously thought to be unreachable.
The Science Behind SeeMe
The core idea behind SeeMe is simple: track facial movements with extreme precision. By analyzing videos of patients with brain injuries, SeeMe looks for even the smallest changes in facial expression, including movements too minute to be spotted by the human eye. The team gave patients spoken commands such as “stick out your tongue,” “open your eyes,” and “show me a smile,” each of which engages a different set of facial muscles. The AI then tracked the resulting movements at the level of individual skin pores.
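As a rough illustration of that general recipe, tracking pixel-level motion inside a facial region and comparing post-command motion to a pre-command baseline, here is a minimal sketch in Python. It uses OpenCV’s Farneback dense optical flow as a stand-in for SeeMe’s pore-level tracking; the function names, thresholds, and window lengths are illustrative assumptions, not the authors’ implementation.

```python
import cv2
import numpy as np

def movement_magnitude(video_path, roi):
    """Mean motion magnitude per frame transition inside a facial ROI.

    roi is an (x, y, w, h) box, e.g. around the mouth or the eyes.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    x, y, w, h = roi
    prev = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # Dense optical flow yields one (dx, dy) vector per pixel, which is
        # sensitive to skin-texture shifts far too small to see by eye.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev = gray
    cap.release()
    return np.array(magnitudes)

def responds_to_command(mags, cmd_idx, fps, baseline_s=5.0, window_s=5.0, k=3.0):
    """Flag a response when post-command motion exceeds the pre-command
    baseline mean by k standard deviations (threshold is illustrative)."""
    base = mags[max(0, cmd_idx - int(baseline_s * fps)):cmd_idx]
    post = mags[cmd_idx:cmd_idx + int(window_s * fps)]
    return bool(post.max() > base.mean() + k * base.std())
```

In practice, a detector along these lines would run separately over eye and mouth regions, mirroring the eye-opening and mouth-movement responses the study reports.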
Out of 37 patients with severe brain injuries, SeeMe detected eye-opening responses in 30 and mouth movements in 16. What makes the tool so promising is that it picked up these movements significantly earlier than physicians did, sometimes by days. In one case, SeeMe detected mouth movements on day 18 after admission, while the patient showed no clear response to motor commands until day 37. The correlation was clear: patients who exhibited more frequent and pronounced facial movements during this early phase tended to have better long-term outcomes, recovering more quickly and more fully after discharge.
The AI’s ability to detect these subtle movements offers a more reliable way of tracking consciousness, especially when patients cannot respond during routine bedside exams, such as requests to open their eyes or squeeze a hand. Mofakham and her team argue that these early signs can be crucial for doctors and families deciding how to proceed with care, from palliative measures to aggressive rehabilitation.
Awareness In Coma Patients That Doctors Often Miss
Beyond its immediate clinical applications, SeeMe could also pave the way for a future in which awareness in patients with long-term brain injuries is detected earlier and with greater certainty. As it stands, detecting consciousness after brain trauma remains an unpredictable and often slow process. “When somebody recovers consciousness, it’s almost like a flickering light bulb,” says neurologist Jan Claassen. Awareness returns gradually rather than all at once, and SeeMe could act as an early detector of that flickering light.
Looking forward, Mofakham and her team hope to refine the tool by expanding it beyond video to other signals, such as the electrical activity of facial muscles. They also aim to create a “yes or no” system, enabling patients who are conscious but unable to move or speak to answer simple questions through facial cues, along the lines of the sketch below.
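The study describes this only as a goal, so the following is a hypothetical sketch of how such a channel could work: assign one facial action to “yes” and a different one to “no,” then run the same command-locked test on each motion trace. The command mapping, thresholds, and window lengths are invented for illustration and are not part of the published work.

```python
import numpy as np

def fired(mags, q_idx, fps, baseline_s=5.0, window_s=5.0, k=3.0):
    """True if motion after the question exceeds the pre-question
    baseline mean by k standard deviations (same test as above)."""
    base = mags[max(0, q_idx - int(baseline_s * fps)):q_idx]
    post = mags[q_idx:q_idx + int(window_s * fps)]
    return bool(post.max() > base.mean() + k * base.std())

def ask_yes_no(mouth_mags, eye_mags, q_idx, fps):
    """Hypothetical mapping: mouth motion signals 'yes', eye opening
    signals 'no'; both or neither firing is treated as inconclusive."""
    yes = fired(mouth_mags, q_idx, fps)
    no = fired(eye_mags, q_idx, fps)
    if yes == no:
        return "indeterminate"
    return "yes" if yes else "no"
```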
As Mofakham told Scientific American, the ethical implications are profound. “People who cannot communicate cannot participate in their care,” she explains. By enabling patients to express their awareness, SeeMe could give them a voice in their own treatment — something that has long been out of reach for those suffering from severe brain injuries.