Google’s DeepMind algorithm just got one step closer to behaving like a human.
DeepMind has gotten scarily smart in recent months – and not just "computer smart," but something closer to "human smart." In March 2016 its AlphaGo program defeated world champion Lee Sedol at Go – a feat long thought to be decades away, since Go is vastly more complex than chess – not only by learning from previous games, but by developing its own, completely novel strategies.
You see, unlike IBM's Deep Blue or Watson, which were built for a single predefined purpose, DeepMind's systems have no fixed purpose. They take in information from their surroundings and develop their own approach to different tasks. Technically, this is deep reinforcement learning: a neural network learns by trial and error from raw observations and a reward signal. For instance, without any game-specific programming, DeepMind's algorithm quickly learned to play simple arcade games such as Space Invaders just by observing the screen and the score.
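The trial-and-error principle behind that result can be sketched with tabular Q-learning on a toy "game." This is a hypothetical simplification, not DeepMind's actual system (which pairs a similar update rule with a convolutional network reading raw pixels):

```python
import random

# Toy "game": a 1-D corridor of 5 cells; reaching the rightmost cell
# yields a reward of 1. The agent knows nothing about the game's rules
# and learns purely from the reward signal.
N_STATES, ACTIONS = 5, [-1, +1]          # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value estimate toward the reward
        # plus the discounted value of the best next action.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right in every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

Nothing in the code encodes "go right"; the preference emerges entirely from the reward, which is what lets the same learner tackle games it has never seen.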
But that's not all the algorithm does. Google put it in charge of cooling its data centres and cut the energy overhead by roughly 15% – and the company believes many other businesses, and even households, could benefit from the same approach. The algorithm is currently involved in two other projects: one with Moorfields Eye Hospital, searching eye scans for early signs of diseases that lead to blindness, and one with University College London Hospitals, automatically differentiating between healthy and cancerous tissue in the head and neck.
DeepMind's systems are capable enough that the engineers working on them have proposed building in a kill switch, in case an algorithm ever goes rogue. But in the meantime, they're teaching one how to ride the subway.
The London Tube
The London subway, or the Tube as locals call it, is one of the most complex transit networks in the world. As any tourist will gladly tell you, it is not easy to navigate, and it's easy to get lost in the crowds and the rush. DeepMind's algorithm knew nothing about the subway, or about navigating public transportation at all. To make it find routes through a model of the network, the researchers had to think outside the box – and so they did.
They developed a neural network coupled to an external memory – a system the paper calls a differentiable neural computer. The combination allows the network not only to learn, but also to store facts in memory and recall them later to make inferences, the way a conventional algorithm would. This is exactly what lets it accomplish tasks with no prior knowledge. For a pre-programmed algorithm, finding a route would be an easy task; for one without any related programming, it's a whole new ball game.
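The core idea – a network reading from and writing to a separate memory by content – can be sketched in a few lines. This is a hypothetical toy, not the paper's architecture (the real system uses learned, fully differentiable read/write controllers and usage tracking):

```python
import numpy as np

def cosine_similarity(key, memory):
    # Similarity between a query key and every row (slot) of the memory matrix.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key)
    return memory @ key / (norms + 1e-8)

def read(memory, key, sharpness=10.0):
    # Soft, content-based read: a weighted average of memory rows,
    # weighted by how closely each row matches the query key.
    scores = sharpness * cosine_similarity(key, memory)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

def write(memory, slot, vector):
    # Simplified write: overwrite one slot. (The real system instead uses
    # soft, differentiable write weights so the whole process is trainable.)
    memory = memory.copy()
    memory[slot] = vector
    return memory

# Store two "facts" as vectors, then recall the one closest to a query.
memory = np.zeros((4, 3))
memory = write(memory, 0, np.array([1.0, 0.0, 0.0]))  # fact A
memory = write(memory, 1, np.array([0.0, 1.0, 0.0]))  # fact B

recalled = read(memory, np.array([0.9, 0.1, 0.0]))    # query resembling fact A
```

Because the read is a smooth weighted average rather than a hard lookup, every step is differentiable, which is what allows the memory operations themselves to be learned by gradient descent – for example, learning to store stations and connections, then chaining recalls to infer a route.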
This has intriguing consequences. Sure, finding a route from Piccadilly to Westminster is not going to change the world, but it does indicate that algorithms are becoming more and more human-like in how they reason. Where DeepMind will take off next is anyone's guess.
The paper was published in Nature on 12 October 2016. Journal reference: Graves, A. et al., "Hybrid computing using a neural network with dynamic external memory", Nature.