

Google's MuZero chess AI reached superhuman performance without even knowing the rules

This gives it a surprisingly human-like intuition.

Mihai Andrei
October 8, 2021 @ 10:06 pm


Artificial Intelligence is becoming more and more intelligent — and more and more human-like.

Image credits: DeepMind

A lot of things have changed in modern chess compared to the past, but the most important change is the hegemony of computers. Take Magnus Carlsen, who over the past decade has been the uncontested world chess champion: he can't really claim to be the best chess player, only the best human player.

Chess algorithms have long surpassed the human ability to play the game, for a very simple reason: they can memorize and calculate far better than we can. But when modern AI techniques entered the scene, chess engines were in for a revolution of their own.

Traditionally, chess algorithms were trained in a very straightforward way: they were taught the rules of the game, fed a huge database of games, taught how to calculate, and off they went. But Google's AlphaZero, for instance, takes a very different approach.

AlphaZero has become, arguably, the best chess-playing entity in the world without studying a single human game. Instead, it was only taught the rules of the game and allowed to play against itself over and over. Intriguingly, this not only enabled it to achieve remarkable prowess, but also to develop a style of its own. Unlike traditional engines, which play a very concrete, grinding type of game, AlphaZero tends to play in a very conceptual and creative way (though the word 'creative' will surely annoy some readers). For instance, AlphaZero will often sacrifice a piece with no immediate reward in sight, without necessarily calculating all the outcomes. Instead of playing moves that it can fully calculate to be better, which is what most engines do, AlphaZero plays moves that seem better.
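To get a feel for what "learning only by playing against itself" means, here is a deliberately tiny sketch of the self-play idea using a toy counting game instead of chess. Everything in it (the "race to 10" game, the tabular value estimates, the update rule) is an illustrative stand-in chosen for brevity, not DeepMind's actual setup, which uses deep neural networks and Monte Carlo tree search.

```python
import random

random.seed(0)

# Toy "race to 10" game: players alternate adding 1 or 2 to a running
# total; whoever reaches exactly 10 wins. The agent is told only the
# legal moves and the win condition, then improves purely by playing
# itself -- the same self-play recipe, vastly simplified.

WIN_TOTAL = 10
MOVES = (1, 2)
value = {}  # value[total] = estimated win chance for the player to move

def choose(total, explore):
    """Pick a move: mostly greedy on current values, sometimes random."""
    legal = [m for m in MOVES if total + m <= WIN_TOTAL]
    if explore and random.random() < 0.2:
        return random.choice(legal)
    def score(m):
        nxt = total + m
        if nxt == WIN_TOTAL:
            return 1.0                       # winning move
        return 1.0 - value.get(nxt, 0.5)     # bad for opponent = good for us
    return max(legal, key=score)

def self_play_game():
    """Play one game against itself, then update values from the result."""
    history, total = [], 0
    while total < WIN_TOTAL:
        history.append(total)
        total += choose(total, explore=True)
    outcome = 1.0  # the player who moved last won
    for pos in reversed(history):            # walk back, alternating credit
        old = value.get(pos, 0.5)
        value[pos] = old + 0.1 * (outcome - old)
        outcome = 1.0 - outcome

for _ in range(5000):
    self_play_game()

# With perfect play, totals of 1, 4, and 7 are lost for the player to move.
print(choose(0, explore=False))  # the first move the agent learned to prefer
```

No human games, no strategy tips: the value table starts out knowing nothing, and good play emerges only from the outcomes of games the agent plays against itself.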

It’s a surprisingly human way to approach the game, although many of AlphaZero’s moves seem distinctly inhuman.

Now, Google’s researchers have taken things to the next level with MuZero.

Unlike AlphaZero, MuZero wasn’t even told the rules of chess. It wasn’t allowed to make any illegal moves, but it was allowed to ponder them. This allows the algorithm to think in a more human way, considering threats and possibilities even when they might not be apparent or possible at a given time. For instance, the threat of losing an exposed piece might always be present in the back of a human player’s mind, even though it is not threatened at the moment.

Researchers say that this also allows MuZero to develop an internal intuition regarding the rules of the game.
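MuZero's trick, at its core, is that it plans inside a model it learned itself: one learned function encodes what it observes, another imagines how its internal state changes after a move, and a third guesses how good a state is. The toy sketch below shows that three-function structure only; the hash-based "networks" are placeholder assumptions standing in for the deep networks and tree search the real system uses.

```python
# MuZero in miniature: the agent never consults the real game's rules
# while planning. It learns three functions and plans entirely in its
# own learned "latent" space. The bodies below are toy stand-ins.

def representation(observation):
    """h: encode a raw observation into an internal (latent) state."""
    return hash(observation) % 97  # placeholder for a learned encoder

def dynamics(latent_state, action):
    """g: predict the next latent state and reward -- no game rules used."""
    next_state = (latent_state * 31 + action) % 97
    predicted_reward = (next_state % 5) / 4.0  # toy learned reward head
    return next_state, predicted_reward

def prediction(latent_state):
    """f: policy prior and value estimate for a latent state."""
    policy = [0.25, 0.25, 0.25, 0.25]  # uniform prior over 4 actions
    return policy, (latent_state % 10) / 10.0

def plan(observation, depth=3):
    """Search over imagined futures using only the learned model."""
    root = representation(observation)
    best_action, best_return = None, float("-inf")
    for action in range(4):
        state, reward = dynamics(root, action)
        total = reward
        for _ in range(depth - 1):          # greedy rollout in latent space
            policy, _ = prediction(state)
            a = max(range(4), key=lambda i: policy[i])
            state, reward = dynamics(state, a)
            total += reward
        total += prediction(state)[1]       # bootstrap with the leaf's value
        if total > best_return:
            best_action, best_return = action, total
    return best_action

print(plan("some board position"))  # an action chosen purely from the model
```

Because the dynamics function is free to imagine any move, legal or not, the search can entertain threats and possibilities the rules would forbid right now, which is where the "human-like intuition" framing comes from.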

The Elo evaluation of MuZero throughout training in chess, shogi, Go, and Atari. Image Credit: DeepMind

This led to remarkably good performance. Although the details the researchers presented are sparse, they report that MuZero matched AlphaZero's performance. But it gets even better.

Researchers didn't train the engine only on chess: they also trained it on Go, shogi, and the 57 Atari games commonly used in this sort of study.

The most impressive results came from Go, a game that is unfathomably more complex than chess. MuZero slightly exceeded the performance of AlphaZero despite using less overall computation, which seems to indicate that MuZero has a deeper understanding of the game and the positions it was playing. MuZero also excelled at the Atari games, outperforming state-of-the-art algorithms in 42 out of 57 of them.

Of course, there is much more to this than just chess, Go, or Pac-Man. There are concrete lessons here that can be applied to artificial intelligence in very practical settings.

“Many of the breakthroughs in artificial intelligence have been based on either high-performance planning [or model-free reinforcement learning],” wrote the researchers. “In this paper we have introduced a method that combines the benefits of both approaches. Our algorithm, MuZero, has both matched the superhuman performance of high-performance planning algorithms in their favored domains — logically complex board games such as chess and Go — and outperformed state-of-the-art model-free [reinforcement learning] algorithms in their favored domains — visually complex Atari games.”

The study can be read as a preprint on arXiv.

