Scientists are teaching robots to say 'No' to commands. Is that a good thing?

Researchers at Tufts alter the laws of robotics to teach robots to say "no".

Tibi Puiu
November 27, 2015 @ 2:07 pm


In the 1940s, when real robots, let alone artificial intelligence, weren’t around, the famed sci-fi author Isaac Asimov set forth a set of laws known as the “Three Laws of Robotics”. These state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Above all else, it seems, a robot must obey a human at all times, except when such an order might harm a human. Researchers at Tufts University’s Human-Robot Interaction Lab, however, use their own set of rules – one where robots can reject a command. That sounds like the plot of a bad movie about human annihilation at the hands of artificial overlords.

If you think about it for a moment, though, it makes sense. Humans aren’t exactly perfectly rational, so we sometimes make, for lack of a better word, stupid decisions. Passing these decisions on to robots could have drastic consequences. What the Tufts researchers suggest is applying to robots, in a similar fashion, the reasoning humans use to assess a command. According to IEEE, linguistic theory says humans assess a request by checking it against so-called felicity conditions (a toy sketch of such a check follows the list). These are:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?
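Here is a minimal sketch, in Python, of what checking a command against the five felicity conditions could look like. This is only an illustration under assumed names (Command, Robot, and all of their fields are invented here), not the Tufts lab’s actual architecture:

```python
# Toy felicity-condition check: each condition maps to one test, and the
# first failing condition produces a spoken-style rejection.
from dataclasses import dataclass, field

@dataclass
class Command:
    action: str    # e.g. "walk_forward"
    speaker: str   # who issued the command

@dataclass
class Robot:
    known_actions: set = field(default_factory=set)      # 1. knowledge
    operational: bool = True                              # 2. capacity
    busy: bool = False                                    # 3. goal priority and timing
    trusted_speakers: set = field(default_factory=set)    # 4. social role and obligation
    forbidden_actions: set = field(default_factory=set)   # 5. normative permissibility

    def assess(self, cmd: Command) -> str:
        if cmd.action not in self.known_actions:
            return "Sorry, I don't know how to do that."
        if not self.operational:
            return "I'm not physically able to do that."
        if self.busy:
            return "I can't do that right now."
        if cmd.speaker not in self.trusted_speakers:
            return "You're not authorized to ask me that."
        if cmd.action in self.forbidden_actions:
            return "That would violate a rule I have to follow."
        return f"OK, doing {cmd.action}."

robot = Robot(known_actions={"walk_forward"}, trusted_speakers={"operator"})
print(robot.assess(Command("walk_forward", "stranger")))  # rejected: untrusted speaker
print(robot.assess(Command("walk_forward", "operator")))  # accepted
```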

The first three are self-explanatory. The fourth condition basically asks “can I trust you? Who are you to tell me what to do?”. The fifth asks “OK, but if I do that, do I break any rules?” (civil and criminal laws for humans, and possibly an altered version of Asimov’s Laws of Robotics for robots). In the videos below, the Tufts researchers demonstrate their work.

First, a robot set on a table is ordered to walk across it. The robot recognizes, however, that by doing so it would fall off and possibly damage itself, so it rejects the order. The researcher then changes the framework by telling the robot “I will catch you”, and the robot complies. It’s worth noting that the robot didn’t have these exact conditions preprogrammed. Natural language processing lets the robot grasp, in a general way, what the human means: “you will not violate rule X, because the circumstance that would cause damage has been rendered void”.
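The demo’s core move – an assertion retracting the belief that blocked the action – could look something like this hypothetical sketch; the belief names and the hard-coded utterance are invented for illustration:

```python
# Toy belief update: "walk_forward" is impermissible while the robot believes
# it would fall; the assertion "I will catch you" voids that hazard.
class WorldModel:
    def __init__(self):
        self.beliefs = {"edge_ahead": True, "will_be_caught": False}

    def action_is_safe(self, action: str) -> bool:
        if action == "walk_forward":
            # Unsafe only if there is an edge AND nobody will catch the robot.
            return not (self.beliefs["edge_ahead"]
                        and not self.beliefs["will_be_caught"])
        return True

    def hear_assertion(self, statement: str):
        # A real system would parse the utterance; we hard-code the demo's case.
        if statement == "I will catch you":
            self.beliefs["will_be_caught"] = True

world = WorldModel()
print(world.action_is_safe("walk_forward"))  # False -> the robot refuses
world.hear_assertion("I will catch you")
print(world.action_is_safe("walk_forward"))  # True  -> the robot complies
```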

Next, the robot is instructed to “move forward” into an obstacle, which it refuses to do because that violates a rule that effectively says “obstacle ahead? don’t budge”. So, the researcher asks the robot to disable its obstacle detection system. Here, felicity condition #4 isn’t met: the human doesn’t have the required privileges, so the robot refuses again.

In the final video, the same situation is presented, only now the human giving the command has the trust necessary for the robot to comply.
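Condition #4 boils down to an authorization check on privileged requests. A hypothetical sketch follows; the role names and privilege table are assumptions, not the lab’s implementation:

```python
# Toy social-role check: privileged actions require a minimum role rank.
PRIVILEGED_ACTIONS = {"disable_obstacle_detection": "supervisor"}
ROLE_RANK = {"visitor": 0, "operator": 1, "supervisor": 2}

def authorized(speaker_role: str, action: str) -> bool:
    required = PRIVILEGED_ACTIONS.get(action)
    if required is None:
        return True  # unprivileged actions need no special role
    return ROLE_RANK[speaker_role] >= ROLE_RANK[required]

print(authorized("operator", "disable_obstacle_detection"))    # False -> refuse
print(authorized("supervisor", "disable_obstacle_detection"))  # True  -> comply
```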

At Tufts, the researchers are also working on a project called Moral Competence in Computational Architectures for Robots, which seeks to “identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.”

“Throughout history, technology has opened up exciting opportunities and daunting challenges for attempts to meet human needs. In recent history, needs that arise in households, hospitals, and on highways are met in part by technological advances in automation, digitization, and networking, but they also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields where growing sophistication could meet societal needs, such as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.”

“The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies.”

Could robots one day become the ultimate ethical agents? Imagine all the virtues, morals, and sound ethics amassed through countless ages downloaded into a computer. An android Buddha. That would be interesting, but in the meantime the Tufts researchers are right: there are situations when robots should disobey a command, simply because it might be stupid. At the same time, this sets a dangerous precedent. Earlier today, I wrote about a law passed by Congress that regulates resources mined from space. Maybe it’s time we saw an international legal framework that compels developers not to implement certain rules in their robots’ programming, or conversely to implement certain requirements. That would be something I think everybody agrees is warranted and important – if only it didn’t interfere with the military.
