Endowing AI with confidence and doubt will make it more useful, paper argues

More like us, in other words.

by Alexandru Micu
June 6, 2017
in News, Robotics, Tech

Hard-wiring AIs with a sense of confidence and self-doubt could help them perform their tasks better while recognizing when they need help or supervision, a team of researchers believes.

Image credits: Tero Vesalainen.

Confidence — that thing we all wish we had at parties but can thankfully substitute with alcohol. Having confidence in one’s own abilities is generally considered a good thing, although, as a certain presidency has shown, too much of it can annoy the whole planet. That’s an important point to consider now that we’re toying with creating actual minds in the form of AI. So would confidence, and its mirror twin doubt, prove of any use to a thinking machine?

That’s the question a team of researchers led by Dylan Hadfield-Menell from the University of California, Berkeley, set out to answer. We already know part of the answer — we know what happens when machines get over-confident, he says. A perfect example is Facebook’s newsfeed algorithms, which were designed to suggest articles and posts matching people’s interests based on what they click on or share. But by following those instructions to the letter, they ended up filling some feeds with nothing but fake news. A sprinkling of self-doubt would have been a great boon in this case.

“If Facebook had this thinking, we might not have had such a problem with fake news,” says Hadfield-Menell.

The team believes the answer lies in human oversight. Instead of showing every article or post the algorithm thinks a Facebook user wants to see, a more uncertain system would be prompted to defer to a human referee in case a link smells fishy.
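To make that idea a bit more concrete, here is a minimal, entirely hypothetical sketch of such a deferral rule: a feed model that publishes items it is confident about and escalates anything it is unsure of to a human reviewer. The Item fields, the trust_threshold value, and the route helper are invented for illustration and are not how Facebook’s system actually works.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float    # how well the model thinks the item matches the user (0..1)
    trust_score: float  # the model's confidence that the item is legitimate (0..1)

def route(items, trust_threshold=0.8):
    """Split candidates into items shown automatically and items escalated
    to a human referee because the model is unsure about them."""
    auto, escalate = [], []
    for item in items:
        (auto if item.trust_score >= trust_threshold else escalate).append(item)
    auto.sort(key=lambda i: i.relevance, reverse=True)  # rank what we do show
    return auto, escalate

feed = [
    Item("Local weather report", relevance=0.9, trust_score=0.95),
    Item("Miracle cure doctors hate", relevance=0.8, trust_score=0.30),
]
auto, escalate = route(feed)
print([i.title for i in auto])      # shown directly
print([i.title for i in escalate])  # held for human review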

But knowing that doubt can help make our machines better at what they do isn’t the same as knowing how, and how much of it, to implement. So the team set up an experiment to determine how a robot’s sense of its own usefulness could factor into the design of artificial intelligence.

The off-switch

The team designed a mathematical model of a human-robot interaction they call the “off-switch game” to see how a machine’s confidence level would shape its interactions with us. Less a game per se and more a simulation, it consists of a robot with an off switch that is given a task to do. A human overseeing the robot can press the switch at any time to stop it, but the robot can also choose to disable the switch if it so desires.

Not very surprisingly, when the machine was given a high degree of confidence in its own usefulness, it would never allow the human player to switch it off. In contrast, a robot endowed with low confidence would always allow the player to shut it down, even if it was performing its task perfectly.
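To get a concrete feel for why uncertainty creates an incentive to leave the off switch alone, here is a minimal, hypothetical sketch rather than the authors’ formal model. Suppose the robot holds a Gaussian belief over U, the unknown value of its task to the human. Acting no matter what is worth E[U] to it, while deferring to a rational human, who only presses the switch when the task would do harm, is worth E[max(U, 0)]. The Gaussian belief and the specific numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def expected_payoffs(mean, std, n_samples=100_000):
    """Monte Carlo estimate of the robot's expected payoff under two policies,
    given a Gaussian belief over U, the unknown value of its task to the human:
    'act' ignores the off switch; 'defer' lets a rational human decide."""
    u = rng.normal(mean, std, n_samples)  # samples from the robot's belief over U
    act = u.mean()                        # E[U]: just do the task
    defer = np.maximum(u, 0.0).mean()     # E[max(U, 0)]: human blocks only harmful tasks
    return act, defer

# The robot thinks its task is mildly useful (mean 1.0); vary how sure it is.
for std in (0.1, 1.0, 5.0):
    act, defer = expected_payoffs(mean=1.0, std=std)
    print(f"std={std:>4}: act={act:5.2f}  defer={defer:5.2f}  "
          f"gain from keeping the switch={defer - act:5.2f}")

With a narrow belief the two policies are nearly tied, so a confident robot gains nothing from deferring; as the belief widens, letting the human weigh in becomes increasingly valuable, which mirrors the qualitative behavior described above.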

Hadfield-Menell believes this is a good indication that we shouldn’t make AIs too “insecure”. For example, if you task your autonomous car with taking the kids to school in the morning, it should never let a child take control: the AI should be confident that its own ability is greater than the children’s and refuse to hand over the wheel. But if you were in the car and told it to stop, it should comply. The best robots, he adds, will be those that can best balance these two extremes.

While the idea of a robot refusing a command to stop or shut down might seem a bit scary or far-fetched (and has been debated at length in the past), context is everything. Humans are fallible too, and you wouldn’t want a robotic firefighter to abandon a rescue or stop putting out a fire because it was mistakenly ordered to, or a robotic nurse to stop treating a delirious patient who orders it to shut down. This kind of confidence is a key part of AI operation and something we’ll have to consider before putting people and AIs side by side in the real world.

The issue is wider than simple confidence, however. As machines will be expected to make more and more decisions that directly impact human safety, it’s important that we put a solid ethical framework in place sooner rather than later, according to Hadfield-Menell. Next, he plans to see how a robot’s decision-making changes with access to more information regarding its own usefulness — for example, how a coffee-pot robot’s behavior might change in the morning if it knows that’s when it’s most useful. Ultimately, he wants his research to help create AIs that are more predictable and make decisions that are more intuitive to us humans.

The full paper “The Off-Switch Game” has been published on the preprint server arXiv.

Tags: AI, artificial intelligence, confidence, doubt, machines, robots

Alexandru Micu

Stunningly charming pun connoisseur, I have been fascinated by the world around me since I first laid eyes on it. Always curious, I'm just having a little fun with some very serious science.

© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.