ZME Science

AI is beating almost all of mankind at Starcraft

Let's teach AI to beat us at strategy war games -- that sounds like a neat idea.

by Mihai Andrei
October 31, 2019
in Future, News, Research, Technology

A new algorithm called AlphaStar is beating all but the very best human players at Starcraft. This is not only a remarkable achievement in itself, but it could teach AIs how to solve complex problems in other applications.

A typical Protoss-Zerg combat. Credits: DeepMind.

The foray of AIs into strategy games is not exactly new. Google's 'Alpha' class of AIs, in particular, has taken the world by storm with their prowess. They're revolutionizing chess and Go, games once thought to be insurmountable for an algorithm. Researchers have also set their sights on other games (DOTA and Poker, for instance), with promising but limited results. The sheer complexity of Starcraft, combined with the fact that you don't have all the information available to you (as opposed to Go and chess, where you see the entire board freely), raised serious challenges for AIs.

But fret not: our algorithm friends are slowly overcoming them. A new Alpha AI, aptly called AlphaStar, has now reached a remarkable level of prowess, ranking higher than 98.5% of all Starcraft II players.

Starcraft is one of the most popular computer strategy games of all time, and its sequel, Starcraft II, follows a very similar premise. Players choose one of three races: the technologically advanced Terrans (humans), the Protoss (masters of psionic energy), or the Zerg (rapidly evolving biological monsters). They then mine resources, build structures and an army, and try to destroy their opponent(s).

There are multiple viable strategies in Starcraft, and there’s no simple way to overcome your opponent. The infamous ‘fog of war’ also hides your opponent’s movements, so you also have to be prepared for whatever they are doing.

AlphaStar managed to reach Grandmaster Tier — a category reserved for only the best Starcraft players.

Credits: DeepMind.

Having an AI that is this good at such a complex game would have been unimaginable a decade ago. The progress is so remarkable that one of the researchers at DeepMind, the company that trains and runs these AIs, called it a 'defining moment' in his career.


“This is a dream come true,” said Oriol Vinyals, lead of the AlphaStar project at DeepMind. “I was a pretty serious StarCraft player 20 years ago, and I’ve long been fascinated by the complexity of the game. AlphaStar achieved Grandmaster level solely with a neural network and general-purpose learning algorithms – which was unimaginable 10 years ago when I was researching StarCraft AI using rules-based systems.

“AlphaStar advances our understanding of AI in several key ways: multi-agent training in a competitive league can lead to great performance in highly complex environments, and imitation learning alone can achieve better results than we’d previously supposed.

“I’m excited to begin exploring ways we can apply these techniques to real-world challenges, such as helping improve the robustness of AI systems. I’m incredibly proud of the team for all their hard work to get us to this point. This has been the defining moment of my career so far.”

The AI didn’t play with ‘AI cheats’ — it had to face the same constraints as human players:

  • it could only see the map through a camera as a human would;
  • it had to play through a server, not directly;
  • it had a built-in reaction time;
  • it had to select a race and play with it.

Even with all these constraints, the AI performed remarkably well.

Every single combat has multiple aspects of strategy involved. Credits: DeepMind.

At any given moment, a Starcraft player (or algorithm) has to choose from up to 10^26 possible actions, all of which can have significant consequences. Therefore, researchers took a different approach than with Go or chess. In those ancient games, the AIs learned by playing millions and millions of games, practicing and improving on their own. For the Starcraft algorithm, however, some initial information had to be fed into the framework.

This is called imitation learning: the AI was essentially taught how to play the game by mimicking human players. By combining this with neural network architectures, the AI was already better than most players. With further training, in which agents competed against each other in a league, it was able to surpass all but the very best players in the world. This enabled it to learn from existing strategies, but also to develop its own ideas.
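To make the idea of imitation learning concrete, here is a deliberately tiny sketch in Python. It implements the simplest possible form, sometimes called behavioral cloning: record which action an "expert" took in each observed state, then replay the most frequent choice. All the states and actions below are invented for illustration; AlphaStar's real policy is a large neural network trained on human replays, not a lookup table.

```python
# Toy behavioral cloning: the simplest form of imitation learning.
# A policy "learns" by counting which action an expert took in each
# state, then acts by replaying the expert's most frequent choice.
from collections import Counter, defaultdict

class ImitationPolicy:
    def __init__(self):
        # state -> Counter of actions the expert took in that state
        self.counts = defaultdict(Counter)

    def observe(self, state, expert_action):
        """Record one (state, action) pair from an expert replay."""
        self.counts[state][expert_action] += 1

    def act(self, state, default="scout"):
        """Return the action the expert chose most often in this state."""
        if state not in self.counts:
            return default  # never saw this state: fall back to a default
        return self.counts[state].most_common(1)[0][0]

# "Train" on a handful of made-up replay frames.
replays = [
    ("low_minerals", "mine"),
    ("low_minerals", "mine"),
    ("enemy_sighted", "attack"),
    ("low_minerals", "expand"),
]
policy = ImitationPolicy()
for state, action in replays:
    policy.observe(state, action)

print(policy.act("low_minerals"))  # "mine": the expert's most frequent choice
print(policy.act("unseen_state"))  # "scout": falls back to the default
```

A lookup table like this collapses the moment it meets a state it never saw, which is exactly why AlphaStar uses a neural network (to generalize across states) and follows imitation with league-based training (to improve beyond the human data it started from).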

“StarCraft has been a grand challenge for AI researchers for over 15 years, so it’s hugely exciting to see this work recognised in Nature. These impressive results mark an important step forward in our mission to create intelligent systems that will accelerate scientific discovery,” said Demis Hassabis, co-founder and CEO, DeepMind.

Professional Starcraft players were also impressed and thrilled to see the AI play out its game. As is the case with previous iterations of Alpha AIs, the algorithm came up with new and innovative tactics.

“AlphaStar is an intriguing and unorthodox player – one with the reflexes and speed of the best pros but strategies and a style that are entirely its own,” said Diego “Kelazhur” Schwimer, professional StarCraft II player for Panda Global. “The way AlphaStar was trained, with agents competing against each other in a league, has resulted in gameplay that’s unimaginably unusual; it really makes you question how much of StarCraft’s diverse possibilities pro players have really explored. Though some of AlphaStar’s strategies may at first seem strange, I can’t help but wonder if combining all the different play styles it demonstrated could actually be the best way to play the game.”

It’s an impressive milestone. It’s also one that could make us question whether teaching AIs to beat us at strategy war games is a good idea. For now, at least, there’s no need to worry: AIs remain very limited in scope. They can become very good, but strictly at the task they are trained for; they have no way of applying what they’ve learned in a computer game to a real-life war scenario, for instance.

Instead, this application could help researchers learn how to design better AIs for dealing with simple real-world scenarios, like maneuvering a robotic arm or operating efficient heating for smart homes.

The research was published in Nature.

Tags: AI, Starcraft

Mihai Andrei

Dr. Andrei Mihai is a geophysicist and founder of ZME Science. He has a Ph.D. in geophysics and archaeology and has completed courses from prestigious universities (with programs ranging from climate and astronomy to chemistry and geology). He is passionate about making research more accessible to everyone and communicating news and features to a broad audience.

© 2007-2025 ZME Science - Not exactly rocket science. All Rights Reserved.
