If you’re a robot, peppering in the occasional mistake might actually make you more popular. It will make the humans like you more, researchers show.

NAO robot.

Image credits Stephen Chin / Flickr.

In a recent study, an international team led by researchers from the Center for Human-Computer Interaction, University of Salzburg, Austria, looked into how people react to robots that mess up. Surprisingly, participants liked the faulty robots much more than those that performed without a hitch.

The field of social robotics is a rapidly advancing one, but we’re still not at a point where robots can operate in a social setting without making errors. Despite this fact, most research in the field starts from the assumption that the bots will perform flawlessly, and any results stemming from unforeseeable conditions during trials are often excluded from the final analysis, according to first author Nicole Mirnig.

“It lies within the nature of thorough scientific research to pursue a strict code of conduct. However, we suppose that faulty instances of human-robot interaction are full with knowledge that can help us further improve the interactional quality in new dimensions,” she adds. “We think that because most research focuses on perfect interaction, many potentially crucial aspects are overlooked.”

Which is a shame, the team believes. Teaching a bot how to understand the social signals of those around it can let it know when it has made an error, so it can react accordingly.


To determine what those social signals are, the team devised a trial in which human participants had to interact with a humanoid NAO robot. The robot asked the participants a set of predefined questions and then asked them to complete a couple of LEGO building tasks. The trick was that NAO was programmed to sometimes fail at the task while interacting with some of the participants, just to see how they would react. Afterward, the participants rated the robot’s likability, anthropomorphism, and perceived intelligence. The team “video-coded the social signals the participants showed during their interaction with the robot” as well as the answers each participant gave.

The data shows that although the participants recognized the robot’s mistakes, they did not necessarily dismiss it. The annotations of the video data also revealed that gaze shifts (from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior. Finally, the participants’ ratings show that people actually liked the robot much more when it made mistakes than when it performed its tasks flawlessly.

“This finding confirms the Pratfall Effect, which states that people’s attractiveness increases when they make a mistake,” says Nicole Mirnig. “Specifically exploring erroneous instances of interaction could be useful to further refine the quality of human-robotic interaction.”

“For example, a robot that understands that there is a problem in the interaction by correctly interpreting the user’s social signals, could let the user know that it understands the problem and actively apply error recovery strategies.”

The work is particularly exciting as it shows just how important potential imperfections in robot design can be. Assuming that robots have to perform perfectly at all times may not just be unfeasible, it might actually be counterproductive. Embracing the risk of error inherent in the field of social robotics would allow us to develop better robots by freeing them to make mistakes — and to learn from them.

On the flip side, it will also make the bots more likable.

The paper “To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot” has been published in the journal Frontiers in Robotics and AI.