Emotional computers really freak people out -- a new take on the uncanny valley

Be like us, but not us.

Alexandru Micu
March 13, 2017 @ 7:27 pm

New research shows that AIs we perceive as too mentally human-like can unnerve us even if their appearance isn’t human, furthering our understanding of the ‘uncanny valley’ and potentially directing future work into human-computer interactions.

Image credits kuloser / Pixabay.

Back in the 1970s, Japanese roboticist Masahiro Mori advanced the concept of the ‘uncanny valley’ — the idea that humans will appreciate robots and animations more and more as they become more human-like in appearance, but find them unsettling as they become almost-but-not-quite-human. In other words, we know how a human should look, and a machine that ticks some of the criteria but not all is too close for comfort.

The uncanny valley of the mind

That’s all well and good for appearance — but what about the mind? To find out, Jan-Philipp Stein and Peter Ohler, psychologists at the Chemnitz University of Technology in Germany, had 92 participants observe a short conversation between two virtual avatars, one male and one female, in a virtual plaza. The characters talked about their exhaustion from the hot weather, the woman spoke of her frustration at having no free time and her annoyance at waiting for a friend who was late, and the man expressed his sympathy for her plight. Pretty straightforward small talk.

The trick was that while everyone witnessed the same scene and dialogue, the participants were given one of four context stories. Half were told that the avatars were controlled by computers, and the other half that they were human-controlled. Furthermore, half of the group was told that the dialogue was scripted and the others that it was spontaneous, in such a way that each context story was fed to one quarter of the group.

Of all the participants, those who were told they would be watching two computers interact on their own reported the scene as more eerie and unsettling than the other three groups did. People were fine with humans or script-driven computers exhibiting natural-looking social behavior, but when a computer showed frustration or sympathy on its own, it put people on edge, the team reports.

Because the team elicited this response through framing alone, with every participant watching the identical scene, they call the phenomenon the ‘uncanny valley of the mind,’ distinguishing the effect of a robot’s perceived personality from that of its appearance and noting that emotional behavior can seem uncanny on its own.

In our own image

Image credits skeeze / Pixabay.

The main takeaway from the study is that people may not be as comfortable with computers or robots displaying social skills as they think they are. It’s all fine and dandy if you ask Alexa about the CIA and she answers or shuts down, but expressing frustration that you keep asking her that question might be too human for comfort. And with social interactions, the effect may be even more pronounced than with appearance alone — because appearance is obvious, but you’re never sure exactly how human-like the computer’s programming is.

Stein believes the volunteers who were told they were watching two spontaneous computers interact were unsettled because they felt their human uniqueness was under threat: if computers can emulate us, what’s stopping them from taking control of our own technology? In future research, he plans to test whether this uncanny valley of the mind can be mitigated when people feel they have control over the human-like agents’ behavior.

So are human-like bots destined to fail? Not necessarily — people may have found the situation creepy because they were only witnessing it. It’s like having a conversation with Cleverbot, only a cleverer one. A Clever2bot, if you will. It’s fun while you’re doing it, but once you close the conversation and mull it over, you just feel like something was off with the talk.

By interacting directly with the social bots, humans may actually find the experience pleasant, thus reducing its creepy factor.

The full paper “Feeling robots and human zombies: Mind perception and the uncanny valley” has been published in the journal Cognition.

