
Game Playing Meets Game Theory: Strategic Learning from Simulated Play

Michael Wellman ( University of Michigan )

Recent breakthroughs in AI game-playing — AlphaGo (Go), AlphaZero (Chess, Shogi, and more), AlphaStar (StarCraft II), Libratus and DeepStack (Poker) — have demonstrated superhuman performance in a range of recreational strategy games. Extending beyond artificial domains presents several challenges, but the basic idea of learning from simulated play employed in most of these systems is broadly applicable to any domain that can be accurately simulated. This thread of work naturally dovetails with methods developed in the Strategic Reasoning Group at Michigan for reasoning about simulation-based games. I will recap some of this work, with emphasis on how new advances in deep reinforcement learning can contribute to a major broadening of the scope of game-theoretic reasoning for complex multiagent domains.

Speaker bio

Photo and bio available at:

https://strategicreasoning.org/michael-p-wellman/

