

Computer beats human at Go for the first time

In a feat that seemed impossible just a few years ago, a computer has beaten a Go champion.

Mihai Andrei
January 29, 2016 @ 3:25 pm


In a feat that seemed impossible just a few years ago, a computer has beaten a Go champion. Computer scientists at Google’s DeepMind division in the UK achieved the milestone, with their artificial intelligence (AI) defeating a human champion.

Ancient game, new players

Go is an ancient game, invented in China over 2,500 years ago. Deceptively simple in appearance, Go is actually an incredibly complex game. It is played by two players, one with black stones and one with white, and the goal is to surround more territory than the opponent. There is a great amount of theory and strategy involved, and the total number of possible games of Go is estimated at 10^761, compared to an estimated 10^120 in chess. In other words, Go is vastly more complex than chess.

“The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves,” the researchers write in their study.

For an even broader comparison, the total number of atoms in the observable universe is estimated at around 10^80. Needless to say, trying to crack Go with a computer the way you’d crack chess is simply not going to work.
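A rough back-of-the-envelope calculation shows where such numbers come from: the size of a game tree grows as the average number of legal moves raised to the power of the typical game length. The sketch below uses the commonly cited ballpark figures of about 35 moves over roughly 80 plies for chess and about 250 moves over roughly 150 plies for Go; these inputs are illustrative assumptions rather than figures from the article, and looser assumptions about game length push the Go estimate toward the even larger number quoted above.

```python
# Rough game-tree size: (average legal moves per position) ** (typical game length).
# The branching factors and game lengths below are commonly cited ballpark figures,
# used only to illustrate the difference in scale; they are not exact values.
import math

def tree_size_exponent(branching_factor: float, game_length: int) -> float:
    """Base-10 exponent of branching_factor ** game_length."""
    return game_length * math.log10(branching_factor)

chess = tree_size_exponent(35, 80)    # roughly 10^124 possible chess games
go = tree_size_exponent(250, 150)     # roughly 10^360 possible Go games

print(f"chess: ~10^{chess:.0f} games")
print(f"go:    ~10^{go:.0f} games")
```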

“Traditional AI methods – which construct a search tree over all possible positions – don’t have a chance in Go,” writes DeepMind founder Demis Hassabis in a Google blog post. “So when we set out to crack Go, we took a different approach.”
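For context, the “traditional” approach Hassabis describes looks roughly like the minimax sketch below, which walks every line of play down to a fixed depth. The toy game interface here is a hypothetical stand-in, not anything from DeepMind; the point is that the work grows as (moves per position) to the power of (search depth), which is hopeless at Go’s scale.

```python
# Minimal sketch of the "traditional" approach: exhaustively search the game tree
# with minimax. The toy Game class is a hypothetical stand-in for a real game; the
# cost grows as (moves per position) ** (search depth), which is hopeless for Go.
from dataclasses import dataclass

@dataclass
class Game:
    """Toy game state: two players push a counter toward +3 or -3."""
    total: int = 0
    to_move: int = 1          # +1 or -1

    def legal_moves(self):
        return [-1, 1]

    def play(self, move):
        return Game(self.total + self.to_move * move, -self.to_move)

    def is_over(self):
        return abs(self.total) >= 3

    def score(self):
        return self.total      # positive favours player +1

def minimax(state: Game, depth: int) -> int:
    """Exhaustively evaluate every line of play down to `depth` plies."""
    if state.is_over() or depth == 0:
        return state.score()
    values = [minimax(state.play(m), depth - 1) for m in state.legal_moves()]
    return max(values) if state.to_move == 1 else min(values)

# 2 moves per position searched 6 plies deep -> 2**6 = 64 leaves; with Go's
# branching factor the same idea explodes immediately.
print(minimax(Game(), depth=6))
```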

Their approach was to build a system that blends a neural network with an advanced tree search. Their system, called AlphaGo, learned from some 30 million moves in games played by human experts, to the point where it could anticipate the opponent’s move 57 percent of the time, beating the previous record of 44 percent.
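As a very rough illustration of that supervised step, the sketch below trains a tiny network to predict an “expert” move from a board position. This is not DeepMind’s actual architecture or data pipeline: AlphaGo’s policy network is a much deeper convolutional net fed with rich board features, and the random tensors standing in for the 30 million expert positions here are pure placeholders.

```python
# Minimal sketch of supervised move prediction: train a small network to predict
# the expert's next move from a board position. AlphaGo's real policy network is
# far deeper and trained on ~30 million expert positions; the tiny model and the
# random stand-in data below are purely illustrative.
import torch
import torch.nn as nn

BOARD = 19                      # Go is played on a 19x19 board
NUM_MOVES = BOARD * BOARD       # one class per board point (pass move omitted)

policy_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * NUM_MOVES, NUM_MOVES),   # logits over all board points
)

optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a batch of expert positions: random boards and random "expert" moves.
boards = torch.randn(64, 1, BOARD, BOARD)
expert_moves = torch.randint(0, NUM_MOVES, (64,))

for step in range(10):
    logits = policy_net(boards)
    loss = loss_fn(logits, expert_moves)    # how well we predict the expert's move
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Move-prediction accuracy on this batch (AlphaGo reached ~57% on held-out expert moves).
with torch.no_grad():
    predictions = policy_net(boards).argmax(dim=1)
accuracy = (predictions == expert_moves).float().mean()
print(f"batch accuracy: {accuracy.item():.2%}")
```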

The next step was to have AlphaGo play against itself, refining its strategies by trial and error over thousands and thousands of games (a schematic sketch of this self-play loop follows below). After that, the team had it play against other Go programs, and it pretty much smashed the competition. But the first real showing was against the reigning three-time European Go champion, Fan Hui. This is where it gets interesting.
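The self-play stage can be pictured as a policy-gradient loop: the program plays a game against a copy of itself, then nudges its move probabilities toward whatever the winner did. The sketch below is a schematic REINFORCE-style loop under toy assumptions; the `play_self_play_game` stub is a hypothetical stand-in for a real Go engine, and DeepMind’s actual pipeline also trains a separate value network and combines both networks with Monte Carlo tree search.

```python
# Schematic self-play reinforcement step (REINFORCE-style): play a game against a
# copy of the current policy, then push the winner's move probabilities up and the
# loser's down. The "game" below is a random stand-in, not a real Go engine.
import torch
import torch.nn as nn
from torch.distributions import Categorical

NUM_MOVES = 19 * 19
policy_net = nn.Sequential(nn.Flatten(), nn.Linear(NUM_MOVES, NUM_MOVES))
optimizer = torch.optim.SGD(policy_net.parameters(), lr=1e-2)

def play_self_play_game(net):
    """Stand-in for a real self-play game: returns the positions seen, the moves
    sampled from the policy, and a random +1/-1 outcome."""
    positions = torch.randn(30, 1, 19, 19)                  # 30 toy positions
    moves = Categorical(logits=net(positions)).sample()
    outcome = 1.0 if torch.rand(1).item() > 0.5 else -1.0   # win or loss
    return positions, moves, outcome

for game in range(100):
    positions, moves, outcome = play_self_play_game(policy_net)
    log_probs = torch.log_softmax(policy_net(positions), dim=1)
    chosen = log_probs[torch.arange(len(moves)), moves]
    # Reinforce moves from won games, discourage moves from lost games.
    loss = -(outcome * chosen).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```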

Computer vs Human

In 2014, less than two years ago, Wired wrote an engaging article about Go, calling it “the mysterious game that computers can’t win”. They estimated that it would take another 10 years before computers could beat humans at this strategy game, and at the time it seemed like a reasonable claim. Computers had already beaten humans at checkers in 1994, and in 1997 IBM’s Deep Blue famously defeated chess world champion Garry Kasparov – but Go stood strong, until now.

The computer won all five games. It played at least as well as its opponent, and it didn’t make any mistakes.

“The problem is humans sometimes make very big mistakes, because we are human. Sometimes we are tired, sometimes we so want to win the game, we have this pressure,” Fan told Elizabeth Gibney at Nature, describing the match. “The programme is not like this. It’s very strong and stable, it seems like a wall. For me this is a big difference. I know AlphaGo is a computer, but if no one told me, maybe I would think the player was a little strange, but a very strong player, a real person.”

Now, there’s only one challenge left for the Go AI – playing South Korea’s Lee Sedol, considered the top Go player in the world. Whether or not it defeats him, AlphaGo has made its point: even Go, for all its complexity, is a game of finite possibilities, and computers will overcome humans sooner rather than later. For its creators, though, what matters is not just the competitive achievements, but the way the system learned and became so good at the game.

“We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI,” writes Hassabis. “However, the most significant aspect of all this for us is that AlphaGo isn’t just an ‘expert’ system built with hand-crafted rules; instead it uses general machine learning techniques to figure out for itself how to win at Go. While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems.”

Journal Reference: Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature (2016).

 
