A software bug could render the last 15 years of brain research meaningless

Some 40,000 studies need to be re-examined. Ouch.

Alexandru Micu
July 7, 2016 @ 5:04 pm


A new study suggests that our fMRI technology might be relying on faulty algorithms — a bug the researchers found in fMRI-specific software could invalidate the past 15 years of research into human brain activity.

Image credits: Kai Stachowiak / Publicdomainpictures

The best tool we have to measure brain activity today is functional magnetic resonance imaging (fMRI). It’s so good, in fact, that we’ve come to rely on it heavily — which isn’t a bad thing, as long as the method is sound and provides accurate readings. But if the method is flawed, the results of years of research into what our brains look like during exercise, gaming, love, drug use and more would be called into question. Researchers from Linköping University in Sweden have performed a study of unprecedented scale to test the validity of the statistical methods behind fMRI analysis, and their results are not encouraging.

“Despite the popularity of fMRI as a tool for studying brain function, the statistical methods used have rarely been validated using real data,” the researchers write.

The team, led by Anders Eklund, gathered resting-state fMRI data from 499 healthy individuals from databases around the world and split them into 20 groups. They then compared these groups against each other, resulting in a staggering 3 million random comparisons. They used these pairs to test the three most popular software packages for fMRI analysis – SPM, FSL, and AFNI.
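The logic behind this validation is simple: if two groups are drawn from the same resting-state (null) data, any “significant” difference between them is a false positive by construction, so the fraction of significant comparisons should match the nominal threshold. Here is a minimal sketch of that idea in Python; the group sizes, the random seed, and the use of a plain two-sample t-test are illustrative assumptions, not the paper’s actual analysis pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_comparisons = 1000
alpha = 0.05
false_positives = 0

for _ in range(n_comparisons):
    # Two "groups" drawn from the same null distribution: any
    # significant difference between them is a false positive.
    group_a = rng.normal(size=20)
    group_b = rng.normal(size=20)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_positives += 1

rate = false_positives / n_comparisons
print(f"Empirical false-positive rate: {rate:.3f}")
```

A well-calibrated method should report a rate hovering near the 5 percent threshold. The study found that cluster-based fMRI inference could instead report far higher rates.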

While the team expected a false-positive rate of around 5 percent (the nominal statistical threshold), the findings stunned them: the software produced false-positive rates of up to 70 percent. This suggests that some results are so inaccurate that they might show brain activity where there is none. In other words, the activity they show is a product of the software’s algorithms, not of the brain being studied.

“These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results,” the paper reads.

One of the bugs they identified had been in the software for the past 15 years. It was finally corrected in May 2015, around the time the team started writing their paper, but the results of studies that relied on the flawed versions before that point remain in question.

So what is actually wrong with the method? Well, fMRI uses a powerful magnetic field to pick up on changes in blood flow in areas of the brain. These minute changes signal that certain brain regions have increased or decreased their activity, and the software interprets them as such. The issue is that when scientists look at the data, they’re not looking at the actual brain. What they’re seeing is an image of the brain divided into tiny ‘voxels’, then interpreted by a computer program, said Richard Chirgwin for The Register.

“Software, rather than humans … scans the voxels looking for clusters,” says Chirgwin. “When you see a claim that ‘Scientists know when you’re about to move an arm: these images prove it,’ they’re interpreting what they’re told by the statistical software.”
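The voxel-and-cluster step Chirgwin describes can be sketched in a few lines: threshold a statistical map, then group neighboring suprathreshold voxels into clusters. The sketch below runs this on pure noise (the 64×64 grid and the z-threshold of 2.3 are illustrative assumptions) to show that noise alone produces clusters. Whether a given cluster is larger than chance would allow is exactly what the flawed statistical model was supposed to decide:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)

# Hypothetical statistical "slice": pure noise, no real brain signal.
# (The 64x64 size and z-threshold of 2.3 are illustrative assumptions.)
stat_map = rng.normal(size=(64, 64))

# Step 1: keep only voxels whose statistic exceeds the threshold.
above = stat_map > 2.3

# Step 2: group neighboring suprathreshold voxels into clusters,
# the same basic operation cluster-extent inference performs.
labels, n_clusters = ndimage.label(above)

# Even pure noise yields suprathreshold clusters; valid inference
# depends on correctly modeling how large such noise clusters get.
sizes = np.bincount(labels.ravel())[1:]
print(f"{n_clusters} clusters found in pure noise, largest: {sizes.max()} voxels")
```

The takeaway: finding clusters is trivial; deciding which clusters are too big to be noise is the hard statistical problem, and that is where the software fell short.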

Because fMRI machines are expensive to use — around US$600 per hour — studies usually employ small sample sizes and there are very few (if any) replication experiments done to confirm the findings. Validation technology has also been pretty limited up to now.

Since fMRI machines became available in the early ’90s, neuroscientists and psychologists have been faced with a whole lot of challenges when it comes to validating their results. But Eklund is confident that as fMRI results are being made freely available online and validation technology is finally picking up, more replication experiments can be done and bugs in the software identified much more quickly.

“It could have taken a single computer maybe 10 or 15 years to run this analysis,” Eklund told Motherboard. “But today, it’s possible to use a graphics card” to lower the processing time “from 10 years to 20 days.”

So what about the nearly 40,000 papers that could now be in question? All we can do is try to replicate their findings and see which hold up and which don’t.

The full paper, titled “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates,” has been published online in the journal PNAS.
