

Humans and computers can be fooled by the same tricky images

The gap between humans and AI is getting narrower by the day.

Tibi Puiu
March 23, 2019 @ 12:00 am


Computers interpreted the images as an electric guitar, an African grey parrot, a strawberry, and a peacock (in this order). Credit: Johns Hopkins.

The ultimate goal of artificial intelligence (AI) research is to fully mimic the human brain. For now, humans still have the upper hand, but AI is advancing at a phenomenal pace. Some argue that AI built on artificial neural networks still has a long way to go, pointing to how easily such systems can be fooled by ambiguous images (think of patterns resembling television static). However, a new study suggests that humans aren't necessarily any better: in some situations, people make the same "wrong" calls a machine would. We're already not that different from the machines we built in our image, the researchers point out.

“Most of the time, research in our field is about getting computers to think like people,” says senior author Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite—we’re asking whether people can think like computers.”

Quick: what's 19×926? I'll save you the trouble: it's 17,594. It took my computer a fraction of a fraction of a second to produce the right answer. But while computers are far better than humans at crunching raw numbers, they're ill-equipped in areas where humans perform effortlessly. Identifying objects is one of them: we can instantly recognize that an object is a chair or a table, a task that AIs have only recently begun to perform decently.

AIs are what enable self-driving cars to scan their surroundings and read traffic lights or recognize pedestrians. Elsewhere, in medicine, AIs are now combing through millions of images, spotting cancer or other diseases from radiological scans. With each iteration, these machines ‘learn’ and are able to come up with a better result next time.

But despite considerable advances, AI pattern recognition can sometimes go horribly wrong. What's more, researchers in the field worry that nefarious agents might exploit this fact to purposefully fool AIs: reconfiguring just a few pixels can sometimes be enough to throw off an AI. In a security context, this is troublesome.
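The pixel-level attacks mentioned above can be illustrated with a toy example. The sketch below is not the study's setup: a linear classifier stands in for a neural network, and the perturbation follows the "fast gradient sign" recipe (shift every pixel a small, equal amount in the direction that most damages the classifier's score).

```python
import numpy as np

# Toy sketch of a pixel-level adversarial attack (not the study's setup).
# A linear "classifier" stands in for a neural network: score = w . x,
# positive score => class A, negative score => class B.
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # pretend-trained weights for an 8x8 image

# Build an input the model classifies as class A with score exactly 1.
margin = 1.0
x = w * margin / (w @ w)           # so that w @ x == margin
score_clean = w @ x                # positive => class A

# Fast-gradient-sign step: move every pixel by the same small epsilon in
# the direction that lowers the score (the gradient of w.x w.r.t. x is w).
epsilon = 2 * margin / np.abs(w).sum()   # just enough to flip the decision
x_adv = x - epsilon * np.sign(w)
score_adv = w @ x_adv              # margin - 2*margin = -margin => class B

print(score_clean > 0, score_adv < 0)    # True True
```

No single pixel changes much; the attack works because all the small shifts are aligned with the model's gradient, so their effects add up instead of canceling out.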

Firestone and colleagues wanted to investigate how humans fare in the same situations that trip up AI. The research team showed 1,800 people a series of images that had previously tricked computers and gave the participants the same kind of labeling options the machines had. The participants had to guess which of two labels the computer had chosen: one was the computer's actual decision, the other a random alternative.

“These machines seem to be misidentifying objects in ways humans never would,” Firestone says. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”

Computers identified the following images as a digital clock, a crossword puzzle, a king penguin, and an assault rifle. Credit: Johns Hopkins.

The participants chose the same answer as the computers 75% of the time. Interestingly, when the game was changed to give people a choice between a computer's first answer and its next-best guess (say, a bagel versus a pretzel), humans validated the machine's first choice 91% of the time. The findings suggest that the gap between human and machine isn't as wide as some might think. As for whether the people who took part in the study thought like a machine, I personally think the framing is a bit off. These machines were designed by humans, and their behavior is modeled on ours. If anything, the findings show that machines are behaving more and more like humans, not the other way around.
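For a sense of scale, 75% agreement on a two-choice task is far above what pure guessing would produce. A quick back-of-the-envelope check (the per-participant trial count below is a made-up illustration, not a figure from the study):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): at least k successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of matching the machine on at least 75% of n two-choice trials
# by guessing alone. n = 40 is a hypothetical trial count for illustration.
n = 40
p_chance = binom_tail(n, int(0.75 * n))
print(f"{p_chance:.4f}")   # about 0.001 -- roughly one in a thousand
```

Even on a modest number of trials, sustained 75% agreement is very unlikely to be coincidence; it points to people genuinely seeing something of what the computers saw.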

“The neural network model we worked with is one that can mimic what humans do at a large scale, but the phenomenon we were investigating is considered to be a critical flaw of the model,” said lead author Zhenglong Zhou. “Our study was able to provide evidence that the flaw might not be as bad as people thought. It provides a new perspective, along with a new experimental paradigm that can be explored.”

The findings appeared in the journal Nature Communications.
