ZME Science

Scientists are teaching robots to say ‘No’ to commands. Is that a good thing?

Researchers at Tufts alter the laws of robotics to teach robots to say "no".

by Tibi Puiu
November 16, 2020
in News, Robotics, Technology

In the 1940s, when real robots, let alone artificial intelligence, weren’t around, famed sci-fi author Isaac Asimov set forth a set of laws known as the “Three Laws of Robotics”. These state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Above all else, it seems, a robot must obey a human at all times, except when such an order might harm a human. Researchers at Tufts University’s Human-Robot Interaction Lab, however, use their own set of rules, one under which robots can reject a command. That sounds like the plot of a bad movie about human annihilation at the hands of artificial overlords.
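To see how that strict priority ordering behaves, here is a minimal sketch in Python; it is purely hypothetical, not any real robot’s control code, and simply treats the Three Laws as an ordered series of checks in which the no-harm rule is always evaluated before obedience:

    # Hypothetical sketch: Asimov's Three Laws as an ordered series of vetoes.
    # Illustrative only, not a real robot-control implementation.

    def permitted(action_harms_human, action_is_human_order, action_endangers_robot):
        """Evaluate an action against the Three Laws in strict priority order."""
        if action_harms_human:            # First Law always wins
            return False, "refused: a human would come to harm"
        if action_is_human_order:         # Second Law: obey, since the First Law is satisfied
            return True, "complying with the order"
        if action_endangers_robot:        # Third Law only matters if no order is in play
            return False, "refused: needless self-endangerment"
        return True, "no objection"

    # A human orders something that would hurt someone; the First Law overrides obedience:
    print(permitted(action_harms_human=True, action_is_human_order=True,
                    action_endangers_robot=False))
    # -> (False, 'refused: a human would come to harm')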

If you think about it for a moment, though, letting robots refuse makes sense. Humans aren’t exactly perfectly rational, so we sometimes make, for lack of a better word, stupid decisions. Passing these decisions on to robots could have drastic consequences. What the Tufts researchers are suggesting is that robots apply much the same reasoning humans use when assessing a command. According to IEEE, linguistic theory holds that humans assess a request by checking so-called felicity conditions. These are:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?

The first three are self-explanatory. The fourth condition basically asks “can I trust you? Who are you to tell me what to do?”. The fifth asks “OK, but if I do that, do I break any rules?” (civil and criminal laws for humans, and possibly an altered version of Asimov’s Laws of Robotics for robots). In the videos below, the Tufts researchers demonstrate their work.
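As a rough illustration of how such a screening step could work (assuming nothing about the Tufts architecture; every name and flag below is invented), the sketch runs the five conditions in order and turns the first failed condition into the robot’s spoken refusal:

    # Hypothetical sketch: screening a command against the five felicity conditions.
    # Not the Tufts implementation; the flags are invented for illustration.

    def screen_command(knows_how, physically_able, free_right_now,
                       speaker_authorized, violates_norm):
        """Return the robot's reply: comply, or refuse with the first failed condition."""
        if not knows_how:
            return "I don't know how to do that."                  # condition 1
        if not physically_able:
            return "I'm not able to do that."                      # condition 2
        if not free_right_now:
            return "I can't do that right now."                    # condition 3
        if not speaker_authorized:
            return "You're not allowed to ask me to do that."      # condition 4
        if violates_norm:
            return "Doing that would break a rule I must follow."  # condition 5
        return "OK, doing it."

    # A well-formed, feasible request from someone without the right social role:
    print(screen_command(True, True, True, speaker_authorized=False, violates_norm=False))
    # -> "You're not allowed to ask me to do that."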

First, a robot set on a table is ordered to walk forward. The robot registers, however, that doing so would make it fall off and possibly damage itself, so it rejects the order. The researcher then changes the framework by telling the robot “I will catch you”, and the robot complies. It’s worth noting that the robot didn’t have these exact conditions preprogrammed. Natural language processing lends the robot a general understanding of what the human means: “You will not violate rule X because the circumstances that would cause damage are rendered void”.
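One way to picture what happens here, purely as a hypothetical sketch rather than the lab’s actual architecture, is that the human’s assurance updates the robot’s model of the situation, so the check “would I be damaged?” no longer fires and the refusal is withdrawn:

    # Hypothetical sketch: a spoken assurance changes the robot's situation model,
    # which changes the outcome of its safety check. Not the Tufts implementation.

    beliefs = {"edge_ahead": True, "will_be_caught": False}

    def walking_forward_is_safe(beliefs):
        # Walking toward an edge is unsafe unless someone will catch the robot.
        return not beliefs["edge_ahead"] or beliefs["will_be_caught"]

    print(walking_forward_is_safe(beliefs))   # False -> "I can't: I would fall."

    beliefs["will_be_caught"] = True          # the human says "I will catch you"
    print(walking_forward_is_safe(beliefs))   # True  -> the robot now complies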

Next, the robot is instructed to “move forward” through an obstacle, which the robot graciously disobeys because it violates a rule that effectively says “obstacle ahead? don’t budge any further”. So the researcher asks the robot to disable its obstacle detection system. In this case, felicity condition #4 isn’t met, because the human doesn’t have the required privileges.

In the final video, the same situation is presented, only now the human giving the command has the trust required for the robot to carry it out.
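These two demos hinge on felicity condition #4: whether the speaker has the standing to ask for something as drastic as switching off a safety system. A toy version of that check, with roles and permissions invented for illustration rather than taken from the actual system, might look like this:

    # Hypothetical sketch of the "social role and obligation" check (condition 4).
    # Not the Tufts code; roles and permissions are invented for illustration.

    PERMISSIONS = {
        "visitor":  {"move_forward", "stop"},
        "operator": {"move_forward", "stop", "disable_obstacle_detection"},
    }

    def may_request(speaker_role, action):
        """Only speakers whose role carries the needed authority may ask for the action."""
        return action in PERMISSIONS.get(speaker_role, set())

    print(may_request("visitor", "disable_obstacle_detection"))   # False -> refuse
    print(may_request("operator", "disable_obstacle_detection"))  # True  -> comply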

At Tufts, the researchers are also working on a project called Moral Competence in Computational Architectures for Robots, which seeks to “identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.”

“Throughout history, technology has opened up exciting opportunities and daunting challenges for attempts to meet human needs. In recent history, needs that arise in households, hospitals, and on highways are met in part by technological advances in automation, digitization, and networking, but they also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields where growing sophistication could meet societal needs, such as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.”

“The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies.”

Could robots, in the future, become the ultimate ethical agents? Imagine all the virtues, morals, and sound ethics amassed through countless ages downloaded into a computer. An android Buddha. That would be interesting, but in the meantime the Tufts researchers are right: there are situations when robots should disobey a command, simply because it might be stupid. At the same time, this sets a dangerous precedent. Earlier today, I wrote about a law passed by Congress that regulates resources mined from space. Maybe it’s time we saw an international legal framework that compels developers not to implement certain rules in their robots’ programming, or conversely to implement certain requirements. That would definitely be something I think everybody agrees is warranted and important, if only it didn’t interfere with the military.

© 2007-2021 ZME Science - Not exactly rocket science. All Rights Reserved.
