

Google scientists propose adding a 'kill switch' for A.I.

When in danger of A.I. overlords, press the big red button.

Tibi Puiu
June 8, 2016 @ 8:31 pm


Researchers at Alphabet’s DeepMind, the company Google bought for $500 million and which made headlines when its A.I. beat the world’s Go champion, are taking artificial intelligence threats very seriously. They propose a sort of kill switch that would prevent an A.I. from going rogue and potentially causing enormous damage.

Image: a red button. Credit: News Dumper

Nick Bostrom, a thin, soft-spoken Swede, is perhaps the world’s most prominent A.I. alarmist. His seminal book Superintelligence warns that once artificial intelligence surpasses human intelligence, we might be in for trouble. One famous example from the book is the ultimate paper clip manufacturing machine: Bostrom argues that as the machine becomes smarter and more powerful, it will devise all sorts of clever ways to convert any material into paper clips, humans included, unless we teach it human values.

This extreme scenario is, first of all, decades away. But because the potential consequences could be catastrophic, a lot of very smart people are taking it seriously. Bill Gates, Elon Musk, and physicist Stephen Hawking have all expressed their concerns, and have donated tens of millions of dollars to programs aimed at nurturing benign artificial intelligence that prizes human values.

The thing about artificial intelligence, though, is that it can also accelerate human technological progress on an unprecedented scale. For this to happen, we humans have to give it a bit of leeway, a free hand. You can’t just hardcode all sorts of restrictions, because you’d end up with a plain ol’ software program.

DeepMind engineers are proposing something different. Instead of coding restrictions, they suggest a framework designed so that machines cannot learn to ignore turn-off commands.

“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the authors wrote in the paper. “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.”
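The technical idea, roughly, is that an agent trained with an off-policy algorithm such as Q-learning can be interrupted during training without the interruptions biasing what it learns, because the learning update bootstraps from the agent's own greedy value estimate rather than from the action a human forced on it. The following is a minimal toy sketch of that idea, not DeepMind's actual code: the environment, reward, and interruption rule (an operator who sometimes shoves the agent left at one state) are all hypothetical simplifications.

```python
import random

# Toy sketch of safe interruptibility: a Q-learner on a 5-state line whose
# chosen action a human can override with a "big red button". Because the
# Q-learning update bootstraps from max_a Q(s', a) -- the agent's own greedy
# estimate, not the action that was forced on it -- the interruptions do not
# teach the agent to avoid being interrupted.

N = 5                    # states 0..4 on a line; reward for reaching state 4
ACTIONS = (-1, +1)       # step left or step right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.2

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def interrupted(s):
    # Hypothetical red button: half the time the agent stands on state 2,
    # the operator forces it one step left.
    return s == 2 and random.random() < 0.5

random.seed(0)
for _ in range(500):
    s = 0
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        if interrupted(s):
            a = -1           # human override of the agent's choice
        s2, r = step(s, a)
        # Off-policy update: bootstraps from the greedy value of s2, so the
        # forced detours at state 2 don't lower the value of walking right.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2
        if r > 0:
            break            # episode ends at the rewarding state

# Despite being repeatedly shoved left at state 2, the learned greedy policy
# still walks right toward the reward from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)
```

In this sketch the interruptions change the agent's experience but not its learning target, which is the essence of the paper's argument; an on-policy learner in the same setup could instead come to undervalue the states where the button gets pressed.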

That’s pretty reassuring, although Laurent Orseau of Google DeepMind, one of the paper’s lead authors, cautioned that “no system is ever going to be foolproof. It is a matter of making it as good as possible, and this is one of the first steps.”

A.I. has the potential to eradicate disease, solve our food and energy problems and lead to unimaginable developments in science. In short, A.I. might save the world. It could also doom it. Maybe one ominous day, a big red button designed in 2016 will avert a calamity.

