Mega-Reward: Achieving Human-Level Play without Extrinsic Rewards

Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Shangtong Zhang, Andrzej Wojcicki, and Mai Xu

Abstract

Intrinsic rewards were introduced to simulate how human intelligence works; they are usually evaluated by intrinsically-motivated play, i.e., playing games without extrinsic rewards but evaluated with extrinsic rewards. However, none of the existing intrinsic reward approaches can achieve human-level performance under this very challenging setting of intrinsically-motivated play. In this work, we propose a novel megalomania-driven intrinsic reward (called mega-reward), which, to our knowledge, is the first approach that achieves human-level performance in intrinsically-motivated play. Intuitively, mega-reward comes from the observation that infants' intelligence develops when they try to gain more control over entities in an environment; therefore, mega-reward aims to maximize the control capabilities of agents over given entities in a given environment. To formalize mega-reward, a relational transition model is proposed to bridge the gap between direct and latent control. Experimental studies show that mega-reward (i) greatly outperforms all state-of-the-art intrinsic reward approaches, (ii) generally achieves the same level of performance as Ex-PPO and professional human-level scores, and (iii) also achieves superior performance when combined with extrinsic rewards.
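To make the core idea concrete, the following is a minimal sketch of a control-based intrinsic reward, not the paper's actual formulation: the paper formalizes control via a relational transition model over direct and latent control, whereas this toy uses a prediction-gain proxy. All names here (control_gain, intrinsic_reward, the entity states, and the two predictors) are hypothetical illustrations; it assumes per-entity state vectors and two learned predictors of the next entity state, one that ignores the agent's action and one that conditions on it.

```python
import numpy as np

def control_gain(entity_next, pred_without_action, pred_with_action):
    """Control score for one entity: how much knowing the agent's
    action improves the prediction of that entity's next state.
    A large gain suggests the entity is (directly or latently)
    under the agent's control."""
    err_without = np.linalg.norm(entity_next - pred_without_action)
    err_with = np.linalg.norm(entity_next - pred_with_action)
    return max(err_without - err_with, 0.0)

def intrinsic_reward(entities_next, preds_without_action, preds_with_action):
    """Hypothetical control-maximizing intrinsic reward: total
    control gain over all entities in the current transition."""
    return sum(
        control_gain(e, p0, p1)
        for e, p0, p1 in zip(entities_next, preds_without_action, preds_with_action)
    )

# Toy usage: two entities with 2-D states; the action-conditioned
# predictor is assumed to be more accurate for the controlled entity.
entities_next = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
preds_without = [np.array([0.2, 0.0]), np.array([0.0, 1.0])]  # ignores the action
preds_with = [np.array([0.9, 0.0]), np.array([0.0, 1.0])]     # conditions on the action
print(intrinsic_reward(entities_next, preds_without, preds_with))  # positive reward for entity 0
```

An agent trained to maximize such a signal is pushed toward states and behaviors where its actions influence more entities, which is the megalomania-driven intuition the abstract describes.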

Book Title
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, New York, USA, February 7–12, 2020
Editor
Vincent Conitzer and Fei Sha
Month
February
Publisher
AAAI Press
Year
2020