Humans and computers can be fooled by the same tricky images

The gap between humans and AI is getting narrower by the day.

Tibi Puiu
March 23, 2019 @ 12:00 am


Computers interpreted the images as an electric guitar, an African grey parrot, a strawberry, and a peacock (in this order). Credit: Johns Hopkins University.

The ultimate goal of artificial intelligence (AI) research is to fully mimic the human brain. Right now, humans still have the upper hand, but AI is advancing at a phenomenal pace. Some argue that AIs built on artificial neural networks still have a long way to go because such systems can be easily fooled by ambiguous images (e.g. patterns resembling television static). However, a new study suggests that humans aren't necessarily any better: in some situations, people make the same wrong calls a machine would. We're already not that different from the machines we built in our image, the researchers point out.

“Most of the time, research in our field is about getting computers to think like people,” says senior author Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite—we’re asking whether people can think like computers.”

Quick: what's 19×926? I'll save you the trouble: it's 17,594. It took my computer a fraction of a fraction of a second to produce the right answer. But while computers are far better than humans at crunching raw numbers, they're ill-equipped in areas where humans perform effortlessly. Identifying objects is one of them: we can instantly recognize that an object is a chair or a table, a task that AIs have only recently begun to perform decently.

AIs are what enable self-driving cars to scan their surroundings and read traffic lights or recognize pedestrians. Elsewhere, in medicine, AIs are now combing through millions of images, spotting cancer or other diseases from radiological scans. With each iteration, these machines ‘learn’ and are able to come up with a better result next time.

But despite considerable advances, AI pattern recognition can sometimes go horribly wrong. What's more, researchers in the field worry that nefarious agents might exploit this fact to purposefully fool AIs. Altering just a few pixels can sometimes be enough to throw off an AI. In a security context, this can be troublesome.
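To get a sense of what "altering a few pixels" means in practice, here is a minimal, hypothetical sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), written in Python with PyTorch. The study doesn't say how its fooling images were generated; the model and input below are placeholders, not the study's materials.

```python
# Hypothetical sketch of an FGSM-style adversarial perturbation.
# The model and image are stand-ins, not those used in the study.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image
label = model(image).argmax(dim=1)  # the class the model currently predicts

# Nudge every pixel a tiny step in the direction that increases the loss
loss = F.cross_entropy(model(image), label)
loss.backward()
epsilon = 0.01  # small enough to be invisible to a human observer
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# To a person, `adversarial` looks identical to `image`, yet the tiny
# perturbation can push the model toward a completely different label.
print(model(adversarial).argmax(dim=1), "vs original:", label)
```

The unsettling part is that epsilon can be so small that no human would notice the change, which is exactly why this class of attack worries security researchers.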

Firestone and colleagues wanted to investigate how humans fare when shown the very images that trip up machines. The research team showed 1,800 people a series of images that had previously tricked computers and gave the participants the same kind of labeling options the machines had. On each trial, participants had to guess which of two options the computer had chosen: one was the computer's actual decision, the other a random answer.
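As a rough illustration of this forced-choice setup, here is a toy sketch with invented labels and a simulated participant; the real study used human subjects and its own stimuli, which aren't reproduced here.

```python
# Toy sketch of the two-alternative forced-choice task.
# Labels and the "participant" are invented for illustration only.
import random

# (machine's label, random foil) pairs -- hypothetical examples
trials = [
    ("electric guitar", "canoe"),
    ("African grey parrot", "mailbox"),
    ("strawberry", "umbrella"),
    ("peacock", "typewriter"),
] * 100

def participant_choice(machine_label, foil):
    """Stand-in for a human subject. Here it guesses at random,
    which pins agreement near the 50% chance level."""
    return random.choice([machine_label, foil])

agreements = [participant_choice(m, f) == m for m, f in trials]
print(f"agreement with the machine: {sum(agreements) / len(agreements):.0%}")
# The study's actual participants agreed with the machine about 75%
# of the time, well above the 50% expected by chance.
```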

“These machines seem to be misidentifying objects in ways humans never would,” Firestone says. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”

Computers identified the following images as a digital clock, a crossword puzzle, a king penguin, and an assault rifle. Credit: Johns Hopkins University.

The participants chose the same answer as the computers 75% of the time. Interestingly, when the game was changed so that people chose between the computer's first answer and its next-best guess (e.g. a bagel versus a pretzel), humans validated the machine's first choice 91% of the time. The findings suggest that the gap between human and machine isn't as wide as some might think. As for whether the people who took part in the study actually thought like machines, I personally think the framing is a bit off. These machines were designed by humans, and their behavior is modeled on ours. If anything, these findings show that machines are behaving more and more like humans, and not the other way around.

“The neural network model we worked with is one that can mimic what humans do at a large scale, but the phenomenon we were investigating is considered to be a critical flaw of the model,” said lead author Zhenglong Zhou. “Our study was able to provide evidence that the flaw might not be as bad as people thought. It provides a new perspective, along with a new experimental paradigm that can be explored.”

The findings appeared in the journal Nature Communications.
