Once a backwater of computer science, artificial intelligence has finally permeated everyday technology after decades of hype. AI now powers Google’s search, Facebook’s face recognition algorithms, and Tesla’s self-driving features, and thousands of services and software products now feed big data into self-learning machines. Given ever-expanding computing power and advances in computer science, some experts claim we are at the dawn of a new age of computing: the age of artificial intelligence. Some predictions are more bombastic, like those of the famous Ray Kurzweil, who believes that in 15 years’ time computers will be more intelligent than we are and will be able to understand what we say, learn from experience, make jokes, tell stories and even flirt. But with great power comes great responsibility, and it’s up to us to make sure that whatever superintelligence emerges from the bowels of a global mainframe will be there to serve and benefit humans.

(graphic illustration provided by T2 Telehealth and Technology)

Some notable entrepreneurs and scientists, like Bill Gates, Elon Musk and Stephen Hawking, have repeatedly expressed their concerns in this respect. Now Musk, who once said “AI is potentially more dangerous than nukes,” has joined other entrepreneurs, researchers and engineers to found a new non-profit called OpenAI. The organization will work on developing AI to solve problems such as recognising faces or translating languages. The ultimate goal is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” an OpenAI statement reads.

Besides Musk, founders and key members of the new non-profit include Peter Thiel (co-founder of PayPal Holdings Inc. and Palantir Technologies Inc.), Sam Altman (president of Y Combinator and co-chairman of the newly formed OpenAI), Greg Brockman (formerly the CTO of Stripe, a startup valued in excess of $1 billion, and now OpenAI’s CTO), Ilya Sutskever (formerly with Google), Yoshua Bengio (one of the founding figures of a powerful form of artificial intelligence called deep learning), and Alan Kay, a famous American computer scientist.

On Medium, Steven Levy published a great interview series with Musk and Altman about this most recent venture.

How did this come about?

Sam Altman: We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we are creating OpenAI. The organization is trying to develop a human positive AI. And because it’s a non-profit, it will be freely owned by the world.

Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to insure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501(c)(3), a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.

And then philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.

OpenAI said it expects to spend only a “tiny fraction” of its $1 billion in funding over the next few years. The thinking is that if you give very smart and creative people the latitude, time and money to develop whatever they want, without corporate pressure to ship a product, you’ll eventually come up with a breakthrough.

“Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the OpenAI team said in a statement.
