

Killer AI? Let’s Solve the Smaller Problems First


Daniel Faris
July 9, 2015 @ 10:35 am


[Image: robot mural]

Tesla Motors’ Elon Musk has said that our civilization is dangerously close to encountering AI problems within a “five-year timeframe, 10 years at most.” He made the comment on the website Edge.org shortly before deleting it.

His point was that, sometime soon, we may actually create a form of artificial intelligence that decides to rise up and wipe out the human race, a la Skynet from the Terminator films. At the very least, he believes that it may cause us harm in some way. Musk clearly stated that AI could potentially be “more dangerous than nukes.”

That may seem far-fetched for now, but he believes it’s possible—so much so that he’s actively working to prevent it. If there’s anyone who can keep it from happening, it’s Musk. He’s responsible for PayPal, Tesla Motors, and SpaceX—all of which are incredibly innovative in their respective industries.

But Musk is not just running blindly through the streets screaming that the sky is falling; he’s actually trying to stop it from becoming a reality. Meanwhile, other experts are calling attention to the less movie-worthy problems posed by AI and automation, such as massive unemployment, to name just one.

How Elon Musk Plans to Save Us from Killer AI

Elon Musk recently donated millions of dollars to the Future of Life Institute—over $10 million in total. It’s not the money that’s going to save us, though; it’s how the institute plans to use the funds.

The institute is funding research aimed at making AI “robust and beneficial,” as opposed to dangerous and harmful. Musk has long supported the idea that future AI technologies must be just as beneficial as they are capable. If the power of AI ever exceeds its usefulness to the human race, that’s when we may run into significant problems. And the Future of Life Institute seems to be right behind him in that belief.

As for the funds, they will be doled out to 37 teams, all of which will be conducting different research on AI. The 37 teams chosen were narrowed down from an initial total of about 300.

They will be studying how to teach AI to better understand humans, how to align robot and AI interests with humanity’s, and how to ensure that AI always remains under our control. That latter point is important, because so long as we maintain control, it’s highly unlikely that AI will be able to “rise up” against us.

If you’re interested in seeing the complete list of those grant winners, the institute has announced them publicly via their official site.

What Are the Real Problems?

Despite its commitment to reining in AI, the institute argues that discussions fixated on the concept of “killer AI” detract from the more immediate problems at hand. Max Tegmark, president of the Future of Life Institute, believes there are more pressing issues bubbling to the surface, such as the economic impact of AI replacing existing jobs.

And what might that economic impact look like? To begin with, all kinds of heavy machinery that currently requires a human operator may soon be replaced with autonomous machines. The upside, of course, is that the world might see a dramatic reversal in workplace injury trends, since autonomous technologies are far less prone to error than their human counterparts. Those human errors already cost U.S. businesses about $1 billion per week in workers’ compensation payouts.

And then we’ll need to consider the real societal cost of suddenly having millions fewer jobs in the U.S. The taxi and trucking industries are particularly vulnerable; AI-powered autonomous cars are looking less and less like a pipe dream and more like a very real possibility. With the possibility of massive unemployment looming, many are turning to a revolutionary but economically viable solution known as a universal basic income. It’s gaining traction in some of the Nordic countries and even parts of Canada, but we have yet to see an American politician brave enough to bring the topic before Congress.

Regardless, these are the delicate trade-offs that will define future research into automation and artificial intelligence. Says Max Tegmark:

“The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI. We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.” That said, part of the plan is to also research preventative measures to keep AI from becoming such a “deadly” problem.

The good news is that this research is being done before the birth of a more powerful AI. It makes more sense to prepare for the scenario beforehand, so that we don’t run into problems down the road. Of course, there’s always the question of whether the work will be done in time. It’s possible a more advanced form of AI will come to fruition within the next few years. Only time will tell.

Image Credit: jlmaral (via Flickr)

