A Cornell University experiment shows that robot guards aren’t quite here. The team placed a snack table labelled “reserved” in a student common room and stationed a robotic guard to watch over it — without much success.
Dr. Guy Hoffman of Cornell University in Ithaca, New York stationed a mObi robot, made by US robotics company Bossa Nova, to guard a table of food labelled “reserved” from a group of ravenous students. The robot doesn’t have a threatening appearance, but it does come equipped with eyes that let it look around the room. To help him understand how people behave around robots, Dr. Hoffman was recreating a psychology experiment in which a picture of eyes, placed in an easily visible spot, made people behave more honestly.
So in theory, having the robot watch the table should have deterred the students from stealing food off the table — in reality, it proved to be a poor deterrent. Seven per cent of passers-by still took food from the table, only slightly fewer than the 8 per cent who did so when the table wasn’t guarded at all. In contrast, only 2 per cent of people pinched a snack when a human was sitting at the table.
“We talk about robots being in healthcare and education and the government and the military — these places where ethical behaviour is a big issue,” Dr. Hoffman says.
A GoPro camera hidden nearby recorded the behaviour of hundreds of people as they walked by the food table. Many of them seemed more interested in the robot’s reactions, taking food just to see if the robot would stop them. One student simply told his friend to turn the robot around so he could take some food. Another was recorded saying “It’s not listening. It’s a robot, we could take a cookie,” Hoffman and his colleagues said at the IEEE International Symposium on Robot and Human Interactive Communication in New York City last month.
“We found the robot made people engage more with the situation, trying to understand what was going on,” said Hoffman. “They’d wonder: ‘Why is there food? Why is there a robot?’ It raised their curiosity. In the end, however, they would take as much food with the robot present as without a robot.”
Matthias Scheutz, director of the computer science department at Tufts University in Medford, Massachusetts, isn’t surprised. He says the results come down to the robot’s lack of social engagement, such as speaking or “watching” people.
“Even if people thought that the robot was able to see them, it is very unlikely they thought that such a robot could object or report them,” he says.
Which isn’t very surprising — a guard that won’t stop you from doing what you’re not allowed to do isn’t much of a guard at all. Unless the robot reacts to a would-be theft, people will simply ignore it. It doesn’t even have to intervene to stop a thief — just looking disapprovingly at them or protesting audibly would help.
Team member Jodi Forlizzi at Carnegie Mellon University suggests that even dressing the robot up as a security guard, with a dark-blue suit and badge, would help.
“Really subtle changes in how the robot looks or behaves can drastically influence how people interpret it,” she says.
Hoffman himself was a bit surprised by the results, which ran directly counter to his previous findings.
“We thought the robot would have some effect on the results. We feel that somehow it comes down to this idea of being judged more than being monitored,” he concluded.
The paper is still awaiting peer review.