
A Large-Scale Study of Agents Learning from Human Reward

Guangliang Li, Hayley Hung and Shimon Whiteson

Abstract

The TAMER framework, which provides a way for agents to learn to solve tasks using human-generated rewards, has been examined in several small-scale studies, each with a few dozen subjects. In this paper, we present the results of the first large-scale study of TAMER, which was performed at the NEMO science museum in Amsterdam and involved 561 subjects. Our results show for the first time that an agent using TAMER can successfully learn to play Infinite Mario, a challenging reinforcement-learning benchmark problem based on the popular video game, given feedback from both adult (N=209) and child (N=352) trainers. In addition, our study supports prior studies demonstrating the importance of bidirectional feedback and competitive elements in the training interface. Finally, our results also shed light on the potential for using trainers' facial expressions as a reward signal, as well as the role of age and gender in trainer behavior and agent performance.
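To make the learning setup concrete, the sketch below illustrates the general TAMER idea of learning a predictive model of human reward and acting greedily with respect to it. It is a minimal, hypothetical tabular variant for illustration only; the agent studied in the paper plays Infinite Mario and uses its own function approximation and credit-assignment scheme, and the class and parameter names here (TamerAgent, step_size) are not from the paper.

```python
import random

# Minimal sketch of TAMER-style learning from human-generated reward.
# Hypothetical tabular variant; not the paper's implementation.
class TamerAgent:
    def __init__(self, actions, step_size=0.1):
        self.actions = actions
        self.step_size = step_size
        self.h_hat = {}  # predicted human reward for (state, action) pairs

    def predict(self, state, action):
        return self.h_hat.get((state, action), 0.0)

    def act(self, state):
        # Act myopically: pick the action predicted to earn the most
        # human reward, breaking ties at random.
        best = max(self.predict(state, a) for a in self.actions)
        return random.choice(
            [a for a in self.actions if self.predict(state, a) == best])

    def update(self, state, action, human_reward):
        # Supervised update of the human-reward model toward the
        # trainer's feedback signal (e.g. +1 / -1 button presses).
        error = human_reward - self.predict(state, action)
        self.h_hat[(state, action)] = (
            self.predict(state, action) + self.step_size * error)
```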

Book Title
AAMAS 2015: Proceedings of the Fourteenth International Joint Conference on Autonomous Agents and Multi-Agent Systems
Month
May
Note
Extended Abstract.
Pages
1771–1772
Year
2015