It’s dark, you’re in a building, and the building is on fire. Thankfully, there’s an emergency robot there to show you the way out… but it’s behaving strangely and may have malfunctioned. Would you trust it, or follow your own instincts and try to find an exit? Unfortunately, most people would trust the robot, even after they’ve been shown it isn’t functioning properly.
If you ask most people, robots shouldn’t be trusted: either they’re not capable enough, or they’re too smart and coldly calculating, and either way we shouldn’t rely on them. But researchers studying human-robot interaction at the Georgia Institute of Technology have a very different take on the situation. They found that not only are people inclined to trust robots, they do so even when there are clear signs they shouldn’t.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI). “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”
In the study, the researchers recruited 42 volunteers, most of them college students, and asked them to follow a brightly colored robot with the words “Emergency Guide Robot” on its side. In some instances, the robot led the volunteers into the wrong room and traveled around in a circle twice; sometimes it broke down completely. But no matter what happened, the volunteers still followed it.
“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”
The explanation seems to be that the robot becomes an “authority figure”: someone (or something) whose real or apparent authority over others inspires obedience, even when obedience isn’t justified. If that is the case, we need to figure out what can be done to keep people from following robots when they shouldn’t, for example when the robots malfunction.
“These are just the type of human-robot experiments that we as roboticists should be investigating,” said Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering. “We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human.”