ZME Science


Google scientists propose adding a ‘kill switch’ for A.I.

When in danger of A.I. overlords, press the big red button.

by Tibi Puiu
June 8, 2016
in News, Technology

Researchers at Alphabet's DeepMind, the company Google bought for $500 million and whose engineers made headlines when their A.I. beat the world's Go champion, are taking artificial intelligence threats very seriously. They propose adding a sort of kill switch that would prevent an A.I. from going rogue and potentially causing enormous damage.

[Image: a big red button. Credit: News Dumper]

Nick Bostrom, a thin, soft-spoken Swede, is one of the world's most prominent A.I. alarmists. His seminal book Superintelligence warns that once artificial intelligence surpasses human intelligence, we might be in for trouble. One famous example Bostrom discusses in the book is the ultimate paper clip manufacturing machine. He argues that as the machine becomes smarter and more powerful, it will devise all sorts of clever ways to convert any material into paper clips, humans included, unless we teach it human values.

This extreme scenario is, first of all, decades away, but because the potential consequences are catastrophic, a lot of really smart people are taking it very seriously. People like Bill Gates, Elon Musk, and physicist Stephen Hawking have all expressed their concerns, and have donated tens of millions of dollars to programs aimed at nurturing benign artificial intelligence that prizes human values.

The thing about artificial intelligence, though, is that it could also accelerate human technological progress on an unprecedented scale. For that to happen, we humans have to give it a bit of leeway, a free hand. You can't just hardcode all sorts of restrictions, because you'd end up with a plain ol' software program.

DeepMind's engineers are proposing something different. Instead of coding restrictions, they suggest a framework that would make it impossible for machines to learn to ignore turn-off commands.

“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the authors wrote in the paper. “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.”

That's pretty reassuring, although Laurent Orseau, from Google DeepMind and one of the lead authors of the paper, cautioned that "no system is ever going to be foolproof. It is a matter of making it as good as possible, and this is one of the first steps."
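The paper, "Safely Interruptible Agents" by Orseau and Stuart Armstrong, frames the big red button in reinforcement-learning terms: the trick is to make sure the agent never learns to resist (or to court) interruptions. A heavily simplified toy sketch of that idea might look like the following, where interrupted steps are simply excluded from the learning update so the button never shapes the agent's incentives. Everything here, the one-state environment, the action names, the episode structure, is illustrative, not the paper's actual algorithm:

```python
# Toy sketch (illustrative only) of safe interruptibility:
# a Q-learning agent whose learning update skips any step where
# a human operator pressed the "big red button".

ACTIONS = ["work", "safe_shutdown"]

def step(state, action):
    # Hypothetical one-state environment: "work" yields reward 1.
    reward = 1.0 if action == "work" else 0.0
    return state, reward

def run_episode(q, interrupted_steps, steps=100, alpha=0.1, gamma=0.9):
    state = 0
    for t in range(steps):
        if t in interrupted_steps:
            # Operator presses the button: override the policy...
            action = "safe_shutdown"
            state, _ = step(state, action)
            # ...and skip the learning update for this transition,
            # so the interruption never shapes the agent's values.
            continue
        # Greedy policy over current value estimates.
        action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = {(0, a): 0.0 for a in ACTIONS}
run_episode(q, interrupted_steps={10, 20, 30})
```

Because the interrupted transitions are dropped from learning, the agent's value for "safe_shutdown" never changes, so being shut down repeatedly gives it no incentive to dodge the operator. The paper's actual construction is considerably more careful than this sketch, covering optimality guarantees for off-policy learners.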

A.I. has the potential to eradicate disease, solve our food and energy problems and lead to unimaginable developments in science. In short, A.I. might save the world. It could also doom it. Maybe one ominous day, a big red button designed in 2016 will avert a calamity.

Tibi Puiu

Tibi is a science journalist and co-founder of ZME Science. He writes mainly about emerging tech, physics, climate, and space. In his spare time, Tibi likes to make weird music on his computer and groom felines. He has a B.Sc in mechanical engineering and an M.Sc in renewable energy systems.


© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.
