ZME Science

New model boils morality down to three elements, aims to impart them to AI

Now computers can feel guilty too, hooray!

by Alexandru Micu
October 3, 2018
in News, Psychology, Studies

How should a computer go about telling right from wrong?

Ethics.
Image credits Mark Morgan / Flickr.

According to a team of US researchers, a lot of factors come into play — but most people go through the same steps when making snap moral judgments. Based on these observations, the team has created a framework model to help our AI friends tell right from wrong even in complex settings.

Lying is bad — usually

“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, a neuroethics researcher at North Carolina State University and lead author of the study.

“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model — and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”

So what’s so special about the ADC model? Well, the team explains that it can be used to determine what counts as moral or immoral even in tricky situations. For example, most of us would agree that lying isn’t moral. However, we’d probably (hopefully) also agree that lying to Nazis about the location of a Jewish family is solidly moral. The action itself — lying — can thus take various shades of ‘moral’ depending on the context.

We humans tend to have an innate understanding of this mechanism and assess the morality of an action based on our life experience. In order to understand the rules of the game and later impart them to our computers, the team developed the ADC model.

Boiled down, the model posits that people look to three things when assessing morality: the agent (the person who is doing something), the action in question, and the consequence (or outcome) of the action. Using this approach, researchers say, one can explain why lying can be a moral action. On the flipside, the ADC model also shows that telling the truth can, in fact, be immoral (if it is “done maliciously and causes harm,” Dubljević says).
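One way to picture the model is as a function that combines the three judgments into a single verdict. The sketch below is a toy illustration only — the paper does not specify numeric scores or weights, and everything quantitative here is invented — but it shows how a "bad" deed can still yield a "moral" overall judgment when the agent's motive and the outcome are good:

```python
# Toy sketch of the Agent-Deed-Consequence (ADC) idea.
# The [-1, 1] scoring scale and the equal weighting are invented for
# illustration; the study does not propose this numeric formalization.

def adc_judgment(agent: float, deed: float, consequence: float) -> float:
    """Each input is a moral valence in [-1, 1]; output > 0 reads as 'moral'."""
    # Naive starting point: all three elements count equally.
    return (agent + deed + consequence) / 3.0

# Lying to protect a hidden family: good motive, bad deed, good outcome.
verdict = adc_judgment(agent=0.8, deed=-0.6, consequence=0.9)
print(round(verdict, 2))  # 0.37 — positive, so the lie reads as moral overall
```

Under this framing, the same deed (lying) flips sign depending on the agent and consequence scores attached to it, which is exactly the context-sensitivity the model is meant to capture.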

“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”

In order to test their model, the team pitted it against a series of scenarios. These situations were designed to be logical, realistic, and easily understood by professional philosophers and laypeople alike, the team explains. All scenarios were evaluated by a group of 141 philosophers with training in ethics prior to their use in the study.

In the first part of the trials, 528 participants from across the U.S. were asked to evaluate some of these scenarios in which the stakes were low — i.e., the possible outcomes weren’t dire. During the second part, 786 participants were asked to evaluate the more dire scenarios among the ones developed by the team — those that could result in severe harm, injury, or death.


When the stakes were low, the nature of the action itself was the strongest factor in determining the morality of a given situation. What mattered most in such situations, in other words, was whether a hypothetical individual was telling the truth or not — the outcome, be it good or bad, was secondary.

When the stakes were high, the outcome took center stage. It mattered more, for example, that a passenger was saved from dying in a plane crash than what actions (good or bad) were taken to reach this goal.

“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.
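This stakes-dependent pattern — the deed dominating mundane judgments and the consequence dominating life-or-death ones — could be sketched as a shift in how heavily each element is weighted. Again, the weights and interpolation below are invented for illustration, not taken from the study:

```python
# Toy sketch of the stakes effect reported in the study: at low stakes the
# deed dominates the judgment, at high stakes the consequence does.
# All numbers here are invented for illustration only.

def adc_judgment_with_stakes(agent, deed, consequence, stakes):
    """stakes in [0, 1]: 0 = mundane scenario, 1 = life-or-death scenario."""
    w_deed = 0.6 * (1 - stakes) + 0.1 * stakes          # deed weight shrinks
    w_consequence = 0.2 * (1 - stakes) + 0.7 * stakes   # outcome weight grows
    w_agent = 1.0 - w_deed - w_consequence              # weights sum to 1
    return w_agent * agent + w_deed * deed + w_consequence * consequence

# The same act (a violent deed with a life-saving outcome) at both extremes:
low = adc_judgment_with_stakes(agent=0.2, deed=-0.8, consequence=0.9, stakes=0.0)
high = adc_judgment_with_stakes(agent=0.2, deed=-0.8, consequence=0.9, stakes=1.0)
print(low < 0 < high)  # True: immoral at low stakes, moral at high stakes
```

The point of the sketch is only that a single framework can produce both verdicts without changing the scores of the agent, deed, or consequence themselves — only how much each one counts.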

One of the key findings of the study was that philosophers and the general public assess morality in similar ways, suggesting that there is a common structure to moral intuition — one which we instinctively use, regardless of whether we’ve had any training in ethics. In other words, everyone makes snap moral judgments in a similar way.

“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”

The paper “Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment” has been published in the journal PLOS ONE.

Tags: Agent Deed Consequence, AI, artificial intelligence, ethics, morality

Alexandru Micu

Stunningly charming pun connoisseur, I have been fascinated by the world around me since I first laid eyes on it. Always curious, I'm just having a little fun with some very serious science.


© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.
