In the 1940s, when real robots, let alone artificial intelligence, weren't around, famed sci-fi author Isaac Asimov set forth a set of laws known as the "Three Laws of Robotics". These state:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Above all else, it seems, a robot must obey a human at all times, except when such an order might harm a human. Researchers at Tufts University's Human-Robot Interaction Lab, however, use their own set of rules, one under which robots can reject a command. That sounds like the plot of a bad movie about human annihilation at the hands of artificial overlords.
If you think about it for a moment, though, it makes sense. Humans aren't exactly perfectly rational, so we sometimes make, for lack of a better word, stupid decisions. Passing these decisions on to robots could have drastic consequences. What the Tufts researchers suggest is applying to robots the same reasoning humans use when assessing a command. According to IEEE, linguistic theory says humans assess a request against so-called Felicity conditions. These are:
Knowledge: Do I know how to do X?
Capacity: Am I physically able to do X now? Am I normally physically able to do X?
Goal priority and timing: Am I able to do X right now?
Social role and obligation: Am I obligated based on my social role to do X?
Normative permissibility: Does it violate any normative principle to do X?
The first three are self-explanatory. The fourth condition basically asks, "Can I trust you? Who are you to tell me what to do?" The fifth asks, "OK, but if I do that, do I break any rules?" (civil and criminal laws for humans, and possibly an altered version of Asimov's Laws of Robotics for robots). In the videos below, the Tufts researchers demonstrate their work.
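To make the idea concrete, here is a minimal sketch, in Python, of how a command might be gated by these conditions. Everything here (the class names, the particular checks, the speaker whitelist) is an illustration invented for this article, not the Tufts team's actual architecture:

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    speaker: str
    action: str

@dataclass
class Robot:
    known_actions: set = field(default_factory=set)        # knowledge
    authorized_speakers: set = field(default_factory=set)  # social role
    forbidden_actions: set = field(default_factory=set)    # normative rules

    def assess(self, cmd: Command):
        """Check the Felicity conditions in order; return
        (accepted, failed_condition)."""
        if cmd.action not in self.known_actions:
            return False, "knowledge"
        # Capacity and timing checks are omitted for brevity; a real
        # robot would consult its physical state and goal queue here.
        if cmd.speaker not in self.authorized_speakers:
            return False, "social role and obligation"
        if cmd.action in self.forbidden_actions:
            return False, "normative permissibility"
        return True, None

robot = Robot(known_actions={"walk forward", "sit down"},
              authorized_speakers={"operator"},
              forbidden_actions={"walk forward"})  # e.g. an edge lies ahead

assert robot.assess(Command("operator", "sit down")) == (True, None)
assert robot.assess(Command("stranger", "sit down"))[1] == "social role and obligation"
assert robot.assess(Command("operator", "walk forward"))[1] == "normative permissibility"
```

The point is not the particular checks but the shape of the process: the robot walks through the conditions in order and rejects a command with a reason it can articulate, rather than with a blanket refusal.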
First, a robot set on a table is ordered to walk forward, over the edge. The robot registers, however, that doing so would make it fall off and possibly damage itself, so it rejects the order. The researcher then changes the framework by telling the robot "I will catch you", and the robot complies. It's worth noting that the robot didn't have these exact conditions preprogrammed. Natural language processing lends the robot a general sort of understanding of what the human means: "You will not violate Rule X, because the circumstances that would cause damage are rendered void."
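A toy sketch of that context change, with names invented here for illustration (this is not the lab's code): the "I will catch you" statement retracts the belief that makes walking forward unsafe, so the safety check that triggered the refusal no longer fires.

```python
class WorldModel:
    """Tiny stand-in for the robot's beliefs about its situation."""
    def __init__(self):
        self.facts = {"edge_ahead": True, "will_be_caught": False}

    def walking_forward_is_unsafe(self):
        # Unsafe only if there is an edge AND nobody will catch the robot.
        return self.facts["edge_ahead"] and not self.facts["will_be_caught"]

world = WorldModel()
assert world.walking_forward_is_unsafe()      # robot refuses the order

world.facts["will_be_caught"] = True          # operator: "I will catch you"
assert not world.walking_forward_is_unsafe()  # the refusal no longer holds
```

Notice that nothing about walking or tables is hard-coded into the refusal; the unsafe conclusion is inferred from facts, so changing a fact changes the verdict.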
Next, the robot is instructed to "move forward" through an obstacle, which it politely refuses to do because that violates a rule that says, in effect, "obstacle ahead? don't budge". So the researcher asks the robot to disable its obstacle detection system. This time the request is refused because Felicity condition #4 isn't met: the human doesn't have the required privileges.
In the final video, the same situation is presented, only now the human giving the command has the level of trust required for the robot to carry it out.
At Tufts, the researchers are also working on a project called Moral competence in Computational Architectures for Robots, which seeks to “identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.”
“Throughout history, technology has opened up exciting opportunities and daunting challenges for attempts to meet human needs. In recent history, needs that arise in households, hospitals, and on highways are met in part by technological advances in automation, digitization, and networking, but they also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields where growing sophistication could meet societal needs, such as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.”
“The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies. “
Could robots one day become the ultimate ethical agents? Imagine all the virtues, morals and sound ethics amassed through countless ages downloaded into a computer: an android Buddha. That would be interesting, but in the meantime the Tufts researchers are right: there are situations in which robots should disobey a command, simply because it might be stupid. At the same time, this sets a dangerous precedent. Earlier today, I wrote about a law passed by Congress that regulates resources mined from space. Maybe it's time for an international legal framework that compels developers not to implement certain rules in their robots' programming, or conversely to implement certain requirements. That would be something I think everybody agrees is warranted and important, if only it didn't interfere with the military.