DeepMind’s AlphaStar AI wiped the floor with its human opponents

Artificial intelligence developers have made a habit of training agents to play video games, and the latest example comes from DeepMind.

The Alphabet-owned firm’s AlphaStar program recently beat professional StarCraft II players.

In its first series against Dario “TLO” Wünsch, AlphaStar annihilated the competition, winning all five games against the Zerg player. For interest’s sake, the agent was playing as Protoss.

Following that series, DeepMind continued training its agent in preparation for a match against one of the best Protoss players in the world, Grzegorz “MaNa” Komincz.

Despite MaNa being one of the best, the player was bested five times. However, in a final game the human beat the machine.

While we’ve seen AI best humans in a number of games, including Go, Dota 2 and Mario, StarCraft presents unique and interesting challenges for an AI to master.

“The need to balance short and long-term goals and adapt to unexpected situations, poses a huge challenge for systems that have often tended to be brittle and inflexible,” DeepMind wrote in a blog post.

The organisation points to five major AI research challenges its agent had to overcome, namely:

  • Game theory – There is no single best strategy for winning StarCraft, so the agent needs to continually expand its knowledge of the game.
  • Imperfect information – Unlike games such as Go, players cannot see everything other players are doing. The agent has to be trained to actively scout for information.
  • Long-term planning – As games of StarCraft can play out over long periods of time, actions taken early on may only pay off much later in the game.
  • Real-time action – Once again, unlike turn-based games such as Go, StarCraft requires players to act in real time.
  • Action space – A game of StarCraft involves hundreds of possible actions across hundreds of units and buildings, often issued in a hierarchical order (see the rough sketch after this list).
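To get a feel for that last point, here is a rough back-of-the-envelope Python sketch. The unit counts, order counts and target grid are made-up illustrative numbers, not DeepMind’s figures; the sketch simply shows how per-unit choices multiply into an astronomically large joint action space.

# Back-of-the-envelope sketch of StarCraft's action space.
# All numbers below are illustrative assumptions, not DeepMind's figures.

units = 200           # hypothetical units/buildings under the player's control
orders_per_unit = 30  # hypothetical orders: move, attack, build, cast, etc.
target_points = 100   # hypothetical coarse grid of map locations to target

# Choices available for a single unit in one step.
choices_per_unit = orders_per_unit * target_points

# If every unit could be issued an order independently in the same step,
# the joint action space grows exponentially with the number of units.
joint_action_space = choices_per_unit ** units

print(f"choices per unit: {choices_per_unit:,}")
print(f"joint action space: ~10^{len(str(joint_action_space)) - 1}")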

Training DeepMind’s AlphaStar was therefore a mammoth task that required training a deep neural network. For those who are interested in how DeepMind trained the bot, head over to the blog post we mentioned earlier.

There’s also a wonderful infographic at that link showing off the agents that squared off against TLO and MaNa, and how much their play styles differ.

You can watch all the games AlphaStar played against TLO and MaNa in the video below.
