London A.I. company DeepMind has published new details about an algorithm that can learn to play games at superhuman levels even when it doesn't start out knowing the rules of the game. The company says the achievement is a big step toward creating A.I. systems that can handle complicated and uncertain real-world situations.
The algorithm, which DeepMind calls MuZero, has learned to play chess, Go, and the Japanese strategy game Shogi, as well as a host of classic Atari video games at superhuman levels. Previously, DeepMind had created algorithms that could master each of these games, but not a single algorithm that could handle both the board games and the video games. Also, DeepMind’s previous algorithm for mastering the board games, AlphaZero, started out knowing the rules, while MuZero does not.
AlphaZero was itself a more general variant of AlphaGo, the Go-playing algorithm DeepMind famously demonstrated in 2016, defeating Lee Sedol, at the time the world’s top-ranked Go player, in a match in South Korea.
DeepMind, which is owned by Google parent Alphabet, first unveiled MuZero in 2019, but on Wednesday it published more information about the algorithm in a peer-reviewed paper in the prestigious scientific journal Nature.
MuZero works by constructing a model of how it thinks the game it is playing works and then using that model to plan the most beneficial actions in the game. It improves both the model and its plans by playing the game over and over again. In the two-player games, MuZero learns by playing against previous versions of itself.
More important for real-world situations, the model that the algorithm creates of the rules of the game doesn’t have to be 100% accurate, or even complete. It just has to be useful enough that MuZero is able to make some progress in the game from which it can begin to improve.
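The learn-a-model-then-plan loop described above can be sketched in miniature. Everything here is an illustrative assumption, not DeepMind's implementation: the real MuZero learns its model with neural networks and plans with Monte Carlo tree search, while this toy version uses a lookup table and a brute-force lookahead. It does, however, capture the two ideas in the text: the model is built only from observed experience (so it can be incomplete), and planning happens entirely inside that learned model.

```python
import random

class LearnedModelPlanner:
    """Toy sketch of model-based planning in the spirit of MuZero.

    The agent never sees the game's true rules. It records transitions
    it has experienced, then plans by simulating moves inside that
    learned (possibly incomplete) model -- the 'internal fiction' only
    has to be useful enough to make progress.
    """

    def __init__(self, actions):
        self.actions = actions
        # Learned model: (state, action) -> (next_state, reward).
        # Incomplete by construction: it only holds what was observed.
        self.model = {}

    def observe(self, state, action, next_state, reward):
        # Improve the internal model from real game experience.
        self.model[(state, action)] = (next_state, reward)

    def plan(self, state, depth):
        # Choose the action whose simulated rollout inside the learned
        # model yields the highest total reward over `depth` steps.
        def best_return(s, d):
            if d == 0:
                return 0.0
            best = 0.0
            for a in self.actions:
                if (s, a) in self.model:  # unknown transitions are skipped
                    ns, r = self.model[(s, a)]
                    best = max(best, r + best_return(ns, d - 1))
            return best

        scored = [
            (r + best_return(ns, depth - 1), a)
            for a in self.actions
            if (state, a) in self.model
            for ns, r in [self.model[(state, a)]]
        ]
        # Fall back to a random move if nothing relevant has been learned.
        return max(scored)[1] if scored else random.choice(self.actions)

# A three-state toy game: action "A" from state 0 pays off immediately,
# but action "B" leads to a larger delayed reward one step later.
planner = LearnedModelPlanner(["A", "B"])
planner.observe(0, "A", 10, 1.0)
planner.observe(0, "B", 1, 0.0)
planner.observe(1, "A", 11, 5.0)
```

With a one-step horizon the toy agent grabs the immediate point (`planner.plan(0, depth=1)` returns `"A"`), while a two-step horizon uncovers the delayed payoff (`planner.plan(0, depth=2)` returns `"B"`), a cartoon of the general point that deeper planning can find better moves.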
“We are basically saying to the system, just go and make up your own internal fiction about how the world works,” David Silver, the DeepMind computer scientist who led the team that built MuZero, told Fortune. “As long as this internal fiction leads to something that actually matches reality when you come to use it, then we’re fine with it.”
In the Nature paper, DeepMind showed the importance of planning to the algorithm’s capability: The more time MuZero was given to plan, the better it performed. MuZero was many times more capable at Go—about the difference between a strong amateur and a strong professional player—when given 50 seconds to consider a move, compared with when it was given just one-tenth of a second.
This difference held even in the Atari games, where quick reaction times are often thought to matter more than strategic thinking. Here, extra time let MuZero game out more possible scenarios before acting. The researchers noted that the system performed very well in a game like Ms. Pac-Man even when given only enough time to explore six or seven possible moves, far too few to map out all the possibilities.
While DeepMind has not tested MuZero on multiplayer games where hidden information plays an important role—such as poker or bridge—Silver said he suspects MuZero might be able to learn to play these games too, and that the company plans to explore this further. A.I. researchers from Carnegie Mellon University and Facebook have previously built A.I. systems capable of beating champion poker players. Bridge, which relies in part on communication, remains a challenge.
Silver said DeepMind is considering several real-world uses for MuZero. One of the most promising so far, he said, is video compression, where there are many different ways to compress a video signal but no clear rules about which is best for different kinds of video. Initial experiments with MuZero-like algorithms, he said, suggested it might be possible to cut bandwidth by 5% compared with the best previous compression methods. Silver also said MuZero might be useful for building more capable robots and digital assistants, and for extending DeepMind's recent breakthrough in predicting the structure of proteins, research that has so far not relied on the techniques the company pioneered in its games research.
Others, however, are already taking MuZero in very different directions. Last week, the U.S. Air Force revealed that it had used information about MuZero that DeepMind had made freely available to the public last year to help create an A.I. system that could autonomously control the radar of a U-2 spy plane. The Air Force tested the A.I. system, which it calls ARTUMu, on a U-2 Dragon Lady spy plane during a simulated missile strike in a training mission on Dec. 14. Stop Killer Robots, a campaign led by computer scientists, arms control experts and human rights activists, said the Air Force research was a dangerous step toward creating lethal autonomous weapons.
DeepMind told Fortune it had no role in the Air Force research and was unaware of it until seeing news reports about the training mission last week. DeepMind has previously pledged to avoid work on offensive weapons capabilities or A.I. that can identify and track targets and deploy weapons against them without a human making the final decision about striking those particular targets.