Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence

Yuhang Song, Andrzej Wojcicki, Thomas Lukasiewicz, Jianyi Wang, Abi Aryan, Zhenghua Xu, Mai Xu, Zihan Ding and Lianlong Wu

Abstract

Learning agents that are not only capable of taking tests but also of innovating are becoming a hot topic in AI. One of the most promising paths towards this vision is multi-agent learning, where agents act as the environment for each other, and improving each agent means proposing new problems for others. However, existing evaluation platforms are either not compatible with multi-agent settings or limited to a specific game. That is, there is not yet a general evaluation platform for research on multi-agent intelligence. To this end, we introduce Arena, a general evaluation platform for multi-agent intelligence with 35 games of diverse logics and representations. Furthermore, multi-agent intelligence is still at the stage where many problems remain unexplored. Therefore, we provide a building toolkit for researchers to easily invent and build novel multi-agent problems from the provided game set, based on a GUI-configurable social tree and five basic multi-agent reward schemes. Finally, we provide Python implementations of five state-of-the-art deep multi-agent reinforcement learning baselines. Along with the baseline implementations, we release a set of 100 best agents/teams trained with different training schemes for each game, as the base for evaluating agents with population performance. As such, the research community can perform comparisons under a stable and uniform standard. Code for the games, the building toolkit, and the baselines, as well as all corresponding tutorials, has been released online.
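
To make the social-tree and reward-scheme idea concrete, below is a minimal Python sketch of a two-level tree (agents grouped into teams) combining two basic schemes, collaborative within a team and competitive between teams. All function and variable names here are illustrative assumptions for exposition, not Arena's actual API.

    import numpy as np

    def collaborative(rewards):
        # Teammates share: every member receives the team's summed reward.
        total = float(np.sum(rewards))
        return [total] * len(rewards)

    def competitive(rewards):
        # Zero-sum across groups: each group's score is its raw reward
        # minus the mean raw reward of the other groups.
        r = np.asarray(rewards, dtype=float)
        others_mean = (r.sum() - r) / (len(r) - 1)
        return (r - others_mean).tolist()

    # Raw per-agent rewards emitted by the game in one step, by team.
    raw = {"team_a": [1.0, 0.0], "team_b": [0.0, 0.5]}

    # Within each team, members collaborate: they share the team reward.
    team_level = {t: collaborative(m)[0] for t, m in raw.items()}

    # Across teams, play is competitive (zero-sum between team rewards).
    scores = competitive(list(team_level.values()))
    final = {t: [s] * len(raw[t]) for t, s in zip(team_level, scores)}

    print(final)  # {'team_a': [0.5, 0.5], 'team_b': [-0.5, -0.5]}

Stacking such nodes into a deeper tree would yield mixed schemes, e.g., teams of teams that collaborate internally while competing at the root.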

Book Title
Proceedings of the 34th AAAI Conference on Artificial Intelligence, AAAI 2020, New York, New York, USA, February 7–12, 2020
Editor
Vincent Conitzer and Fei Sha
Month
February
Publisher
AAAI Press
Year
2020