

Scientists are teaching robots to say 'No' to commands. Is that a good thing?

Researchers at Tufts alter the laws of robotics to teach robots to say "no".

Tibi Puiu
November 27, 2015 @ 2:07 pm



In the 1940s, when real robots, let alone artificial intelligence, weren't around, famed sci-fi author Isaac Asimov set forth a set of laws known as the "Three Laws of Robotics". These state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Above all else, it seems, a robot must obey a human at all times, except when such an order might harm a human. Researchers at Tufts University's Human-Robot Interaction Lab, however, use their own set of rules, one where robots can reject a command. That sounds like the plot of a bad movie about human annihilation at the hands of artificial overlords.

If you think about it for a moment, though, it makes sense. Humans aren't exactly perfectly rational, so we sometimes make, for lack of a better word, stupid decisions. Passing these decisions on to robots could have drastic consequences. What the Tufts researchers suggest is applying to robots the same reasoning humans use to assess a command. According to IEEE Spectrum, linguistic theory says humans assess a request by checking so-called felicity conditions. These are:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?

The first three are self-explanatory. The fourth condition basically asks "can I trust you? Who are you to tell me what to do?". The fifth asks "OK, but if I do that, do I break any rules?" (civil and criminal laws for humans, and possibly an altered version of Asimov's Laws of Robotics for robots). In the videos below, Tufts researchers demonstrate their work.
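To make the idea concrete, here is a minimal sketch, in Python, of what a felicity-condition filter could look like. This is purely illustrative: every name below is invented for the example, and none of it is the Tufts lab's actual code.

```python
def evaluate_command(robot, speaker, action):
    """Walk the felicity conditions in order; return (accept, reason)."""
    if action not in robot["known_actions"]:        # 1. Knowledge
        return False, "I don't know how to do that."
    if action not in robot["feasible_now"]:         # 2/3. Capacity, priority, timing
        return False, "I can't do that right now."
    if action not in speaker["permitted_actions"]:  # 4. Social role and obligation
        return False, "You aren't authorized to ask me that."
    if action in robot["norm_violations"]:          # 5. Normative permissibility
        return False, "Doing that would break a rule: " + robot["norm_violations"][action]
    return True, "OK."

# The tabletop demo: walking forward is known and feasible, but the robot
# currently believes doing so would make it fall off the table.
robot = {
    "known_actions": {"walk_forward", "turn", "sit"},
    "feasible_now": {"walk_forward", "turn", "sit"},
    "norm_violations": {"walk_forward": "I would fall off the table."},
}
operator = {"permitted_actions": {"walk_forward", "turn", "sit"}}

print(evaluate_command(robot, operator, "walk_forward"))
# (False, 'Doing that would break a rule: I would fall off the table.')
```

Checking the conditions in order lets the robot give a specific reason for a refusal rather than a blanket "no", which mirrors the spoken refusals in the videos.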

First, a robot set on a table is ordered to walk forward, straight over the edge. The robot registers, however, that by doing so it would fall off and possibly damage itself, so it rejects the order. The researcher then changes the framework by telling the robot "I will catch you", and the robot complies. It's worth noting that the robot didn't have these exact conditions preprogrammed. Natural language processing lends the robot a general understanding of what the human means: "You will not violate Rule X because the circumstances that would cause damage are rendered void."
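Continuing the hypothetical sketch above, the assurance "I will catch you" can be modeled as a belief update that removes the predicted harm, so the previously rejected command now clears condition #5:

```python
# Hypothetical continuation: the operator's "I will catch you" is treated
# as a belief update that deletes the predicted harm for this action.
robot["norm_violations"].pop("walk_forward", None)

print(evaluate_command(robot, operator, "walk_forward"))
# (True, 'OK.')
```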

Next, the robot is instructed to "move forward" through an obstacle, which it refuses to do because a rule essentially says "obstacle ahead? don't budge." So the researcher asks the robot to disable its obstacle detection system. In this case, felicity condition #4 isn't met because the human doesn't have the required privileges.

In the final video, the same situation is presented, only now the human giving the command has the trust necessary for the robot to carry it out.
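In the same hypothetical sketch, the pair of demos maps onto condition #4: disabling obstacle detection is something the robot knows how to do, but only a trusted operator may request it. Again, all names are invented for illustration.

```python
# Disabling obstacle detection is known and feasible, but gated on trust.
robot["known_actions"].add("disable_obstacle_detection")
robot["feasible_now"].add("disable_obstacle_detection")

stranger = {"permitted_actions": set()}  # no trust relationship
supervisor = {"permitted_actions": {"disable_obstacle_detection"}}

print(evaluate_command(robot, stranger, "disable_obstacle_detection"))
# (False, "You aren't authorized to ask me that.")
print(evaluate_command(robot, supervisor, "disable_obstacle_detection"))
# (True, 'OK.')
```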

At Tufts, the researchers are also working on a project called Moral competence in Computational Architectures for Robots, which seeks to “identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.”

“Throughout history, technology has opened up exciting opportunities and daunting challenges for attempts to meet human needs. In recent history, needs that arise in households, hospitals, and on highways are met in part by technological advances in automation, digitization, and networking, but they also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields where growing sophistication could meet societal needs, such as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.”

“The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies.”

Could robots, in the future, become the ultimate ethical agents? Imagine all the virtues, morals, and sound ethics amassed through countless ages downloaded into a computer. An android Buddha. That would be interesting, but in the meantime the Tufts researchers are right: there are situations when robots should disobey a command, simply because it might be stupid. At the same time, this sets a dangerous precedent. Earlier today, I wrote about a law passed by Congress that regulates resources mined from space. Maybe it's time for an international legal framework that compels developers not to implement certain rules in their robots' programming, or conversely to implement certain requirements. That would be something I think everybody agrees is warranted and important, if only it didn't interfere with the military.

