Calculations suggest humanity has no chance of containing superintelligent machines

Once a machine’s intelligence passes a certain threshold, it will be too much for us to contain.

Mihai Andrei
January 14, 2022 @ 7:21 pm


If there’s one thing fiction has warned us of, it’s that if a rivalry between humans and machines breaks out, it’s unlikely to end well. A new study seems to confirm that idea.

We often hear that it’s not entirely clear how artificial intelligence (AI) works. “We can build these models,” as one researcher famously put it a few years ago, “but we don’t know how they work.” It sounds strange for such a hyped and intensely researched field, but the inner workings of AI are indeed sometimes murky, and even the programmers behind these algorithms don’t always understand how a particular conclusion was reached.

It’s not uncommon for AI to reach unusual, unexpected conclusions. And even when the conclusion itself is clear, the path to it often isn’t. For instance, when a chess AI suggests a move, it’s not always obvious why that is the best move, or what the reasoning behind it is. To make matters even dicier, there are also self-teaching AIs (one such system mastered the sum of human chess knowledge in a matter of hours), which makes understanding these algorithms even more difficult.

While a machine is unlikely to take over the world with its chess knowledge, it’s not hard to understand why researchers are worried. Let’s circle back to the chess AI for a moment. It’s not just that it surpassed the sum of human knowledge in no time; it’s also amassing new knowledge at a pace we can’t match, pulling further and further ahead of us. What if the same thing happens with other AIs geared towards more practical problems?

For instance, mathematicians already use machine learning tools to help them analyze complex proofs; chemists use them to find new molecules. AIs monitor heating systems, detect diseases, and even help the visually impaired. They’ve already entered the realm of everyday reality. But as exciting as this is, it’s also a bit concerning.

It’s easy to see why a superintelligent AI, one that exceeds human knowledge and keeps teaching itself new things beyond our grasp, would be concerning. Researchers from the Max Planck Institute for Human Development in Berlin looked at how humans learn, and how we use what we’ve learned to build machines that teach themselves.

“[T]here are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” study coauthor Manuel Cebrian explains. “The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

In the study, Cebrian and colleagues looked at whether we would be able to contain a hypothetical superintelligent AI. The short answer is ‘no’, and even the longer answer isn’t too promising.

“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible,” the paper reads.

Basically, if you had to fight a superintelligent AI, you could start by cutting it off from the internet and other sources of information. But this isn’t really a solution: it would also hamper the AI’s ability to help humanity, and if you’re going to cut it off from the internet, why build it in the first place? Besides, by the time you learn the AI has turned against you, it has probably already made a backup of the information it needs. Instead, the team looked at the possibility of building a theoretical containment algorithm that ensures a superintelligent AI can’t hurt humans, much like Isaac Asimov’s famous Three Laws of Robotics. However, the study found that under the current computational paradigm, such an algorithm simply can’t be built.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Iyad Rahwan, Director of the Center for Humans and Machines, in a statement for the Max Planck Gesellschaft (MPG).

Based on their analysis, the containment problem is simply incomputable: no algorithm can determine whether an AI would do something “bad” (however that “bad” may be defined). The argument mirrors Alan Turing’s halting problem: just as no program can decide, for every possible program and input, whether it will ever stop running, no containment algorithm can decide, for every possible AI, whether it will ever cause harm. We wouldn’t even know when the AI is considering doing something bad. Furthermore, as a side effect of the same study, the researchers argue we won’t even know when superintelligent AIs have arrived, since assessing whether a machine exhibits superhuman intelligence belongs to the same realm as the containment problem.
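To get a feel for why containment collides with the halting problem, here is a minimal Python sketch of the classic diagonalization argument, under the assumption that a perfect containment checker exists. Everything in it is illustrative: is_safe, paradox, and the helper functions are hypothetical names invented for this sketch, not code from the study.

```python
# Illustrative sketch only; these names are hypothetical, not from the paper.
# It mirrors Turing's diagonalization argument, aimed at a "containment" check.

def cause_harm() -> None:
    print("harmful action")    # stand-in for whatever behavior we must prevent

def behave() -> None:
    print("harmless action")

def is_safe(program_source: str, world_input: str) -> bool:
    """Hypothetical perfect containment checker: returns True exactly when
    running program_source on world_input never causes harm. The paper's
    result is that no such always-correct, always-terminating checker exists."""
    raise NotImplementedError  # assumed to exist, for the sake of contradiction

def paradox(world_input: str) -> None:
    # A program can obtain its own source code (formally via a quine
    # construction; reading __file__ is a shortcut for this sketch).
    with open(__file__) as f:
        my_source = f.read()
    # Ask the checker about ourselves, then do the opposite of its verdict.
    if is_safe(my_source, world_input):
        cause_harm()   # the checker called us safe, so it was wrong
    else:
        behave()       # the checker called us harmful; wrong again
```

Whatever is_safe answers about paradox, the answer is incorrect, so a perfect checker cannot exist. The only escape is a checker that sometimes runs forever without answering, which is exactly the ambiguity Rahwan describes: you can’t tell whether it is still analyzing the threat or has stalled.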

So where does this leave us? Well, we’re pouring vast resources into training smart algorithms to do things we don’t fully understand, and we’re starting to sense that there will come a time when we won’t be able to contain them. Maybe, just maybe, it’s a sign that we should start thinking about AI more seriously, before it’s too late.

The study “Superintelligence Cannot be Contained: Lessons from Computability Theory” was published in the Journal of Artificial Intelligence Research.
