People find robots that make mistakes more likable, more intelligent

If only I was a robot :(.

by Alexandru Micu
August 9, 2017
in News, Robotics, Science

If you’re a robot, peppering in the occasional mistake might actually make you more popular. It will make the humans like you more, researchers show.

NAO robot.
Image credits: Stephen Chin / Flickr.

In a recent study, an international team led by researchers from the Center for Human-Computer Interaction at the University of Salzburg, Austria, looked into how people react to robots that mess up. Surprisingly, participants liked the faulty robots much more than those that performed without a hitch.

The field of social robotics is advancing rapidly, but we’re still not at a point where robots can operate in a social setting without making errors. Despite this, most research in the field starts from the assumption that the bots will perform flawlessly, and results stemming from unforeseen conditions during trials are often excluded from the final analysis, according to first author Nicole Mirnig.

“It lies within the nature of thorough scientific research to pursue a strict code of conduct. However, we suppose that faulty instances of human-robot interaction are full with knowledge that can help us further improve the interactional quality in new dimensions,” she adds. “We think that because most research focuses on perfect interaction, many potentially crucial aspects are overlooked.”

Which is a shame, the team believes. Teaching a bot to understand the social signals of the people around it would let it know when it has made an error, so it can react accordingly.

To determine what those social signals are, the team devised a trial in which human participants interacted with a human-like NAO robot. The robot asked the participants a set of predefined questions and then asked them to complete a couple of LEGO building tasks. The trick was that NAO was programmed to sometimes fail at these tasks while interacting with some of the participants, just to see how they reacted. Afterward, the participants rated the robot’s likability, anthropomorphism, and perceived intelligence. The team “video-coded the social signals the participants showed during their interaction with the robot” as well as the answers each participant gave.
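To give a rough sense of how results from such a two-condition setup can be summarized, here is a minimal Python sketch. The condition labels, the 1-5 rating scale, and all of the numbers are illustrative assumptions, not the authors’ protocol, questionnaire, or data.

# Minimal sketch: averaging questionnaire ratings per condition.
# Condition labels, scale, and values are assumptions for illustration only.
from statistics import mean

DIMENSIONS = ("likability", "anthropomorphism", "perceived_intelligence")

def summarize(ratings_by_condition):
    """Average each questionnaire dimension within each condition."""
    return {
        condition: {dim: round(mean(r[dim] for r in ratings), 2) for dim in DIMENSIONS}
        for condition, ratings in ratings_by_condition.items()
    }

# Hypothetical 1-5 ratings from two participants per condition.
ratings = {
    "faulty": [
        {"likability": 5, "anthropomorphism": 4, "perceived_intelligence": 4},
        {"likability": 4, "anthropomorphism": 4, "perceived_intelligence": 3},
    ],
    "flawless": [
        {"likability": 3, "anthropomorphism": 3, "perceived_intelligence": 4},
        {"likability": 4, "anthropomorphism": 3, "perceived_intelligence": 4},
    ],
}

print(summarize(ratings))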

The data shows that although the participants recognized the robot’s mistakes, they did not necessarily dismiss it. The annotations of the video data also revealed that gaze shifts (from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior. Finally, the participants’ reports show that people actually liked the robot much more when it made mistakes than when it performed its task flawlessly.

“This finding confirms the Pratfall Effect, which states that people’s attractiveness increases when they make a mistake,” says Nicole Mirnig. “Specifically exploring erroneous instances of interaction could be useful to further refine the quality of human-robotic interaction.”

“For example, a robot that understands that there is a problem in the interaction by correctly interpreting the user’s social signals, could let the user know that it understands the problem and actively apply error recovery strategies.”
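As a loose illustration of that idea, the sketch below treats the social signals the study observed (gaze shifts and laughter) as cues that something in the interaction has gone wrong, and responds with a simple recovery step. The cue names, the threshold, and the recovery behavior are assumptions made for the example; this is not the authors’ implementation or the NAO platform’s API.

# Illustrative sketch: reading social signals as an error cue and recovering.
# Cue names, the threshold, and the recovery strategy are assumptions,
# not the study's implementation or any real robot API.
ERROR_CUES = {"gaze_shift", "laughter"}

def likely_error(observed_signals, threshold=1):
    """Flag a probable interaction error once enough cue signals are seen."""
    return sum(1 for s in observed_signals if s in ERROR_CUES) >= threshold

def recover(say, retry_task):
    """Acknowledge the problem, then retry the step that failed."""
    say("Sorry, I think I got that wrong. Let me try again.")
    retry_task()

# Hypothetical usage with stand-in functions for speech output and task retry.
if likely_error({"gaze_shift", "smile"}):
    recover(say=print, retry_task=lambda: print("(retrying the LEGO step)"))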

The work is particularly exciting because it shows just how important potential imperfections in robot design can be. Assuming that robots have to perform perfectly at all times may not just be unfeasible, it might actually be counterproductive. Embracing the risk of error inherent in social robotics would allow us to develop better robots by freeing them to make mistakes, and to learn from them.

As a bonus, it would also make the bots more likable.

The paper “To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot” has been published in the journal Frontiers in Robotics and AI.

Tags: Human-robot interaction, Mistakes, Social robot
Alexandru Micu

Stunningly charming pun connoisseur, I have been fascinated by the world around me since I first laid eyes on it. Always curious, I'm just having a little fun with some very serious science.
