A single artificial intelligence can beat human players in chess, Go, poker and other games that require a variety of strategies to win. The AI, called Student of Games, was created by Google DeepMind, which says it is a step towards an artificial general intelligence capable of carrying out any task with superhuman performance.
Martin Schmid, who worked at DeepMind on the AI but who is now at a start-up called EquiLibre Technologies, says that the Student of Games (SoG) model can trace its lineage back to two projects. One was DeepStack, the AI created by a team including Schmid at the University of Alberta in Canada and which was the first to beat human professional players at poker. The other was DeepMind’s AlphaZero, which has beaten the best human players at games like chess and Go.
The difference between those two models is that one focused on imperfect-knowledge games – those where players can’t see the full state of the game, such as their opponents’ hands in poker – and one focused on perfect-knowledge games like chess, where both players can see the position of all pieces at all times. The two require fundamentally different approaches. DeepMind hired the whole DeepStack team with the aim of building a model that could generalise across both types of game, which led to the creation of SoG.
Schmid says that SoG begins as a “blueprint” for how to learn games and then improve at them through practice. This starter model can then be set loose on different games, playing against another version of itself, learning new strategies and gradually becoming more capable. But while AlphaZero could adapt only to perfect-knowledge games, SoG can adapt to both perfect and imperfect-knowledge games, making it far more generalisable.
The researchers tested SoG on chess, Go, Texas hold’em poker and a board game called Scotland Yard, as well as Leduc hold’em poker and a custom-made version of Scotland Yard with a different board, and found that it could beat several existing AI models and human players. Schmid says it should be able to learn to play other games as well. “There’s many games that you can just throw at it and it would be really, really good at it.”
This wide-ranging ability comes at a slight cost in performance compared with DeepMind’s more specialised algorithms, but SoG can nonetheless easily beat even the best human players at most games it learns. Schmid says that SoG learns to play against itself in order to improve at games, but also to explore the range of possible scenarios from the present state of a game – even if it is playing an imperfect-knowledge one.
“When you’re in a game like poker, it’s so much harder to figure out; how the hell am I going to search [for the best strategic next move in a game] if I don’t know what cards the opponent holds?” says Schmid. “So there was some set of ideas coming from AlphaZero, and some set of ideas coming from DeepStack into this big mix of ideas, which is Student of Games.”
Michael Rovatsos at the University of Edinburgh, UK, who wasn’t involved in the research, says that while impressive, there is still a very long way to go before an AI can be thought of as generally intelligent, because games are settings in which all rules and behaviours are clearly defined, unlike the real world.
“The important thing to highlight here is that it’s a controlled, self-contained, artificial environment where what everything means, and what the outcome of every action is, is crystal clear,” he says. “The problem is a toy problem because, while it may be very complicated, it’s not real.”
Journal reference: Science Advances DOI: 10.1126/sciadv.adg3256