Social Interaction for Efficient Agent Learning from Human Reward

Guangliang Li, Shimon Whiteson, W. Bradley Knox and Hayley Hung


Learning from rewards generated by a human trainer observing an agent in action has proven to be a powerful method for teaching autonomous agents to perform challenging tasks, especially for non-technical users. Since the efficacy of this approach depends critically on the rewards the trainer provides, we consider how the interaction between the trainer and the agent should be designed to increase the efficiency of the training process. This article investigates the influence of the agent's socio-competitive feedback on the human trainer's training behavior and on the agent's learning. The results of our user study with 85 participants suggest that the agent's passive socio-competitive feedback (showing the performance and scores of agents trained by other trainers in a leaderboard) substantially increases participants' engagement in the game task and improves the agents' performance, even though participants do not play the game directly but instead train the agent to do so. Moreover, making this feedback active (sending each trainer her agent's performance relative to others') induces more participants to train their agents longer and further improves the agents' learning. Further analysis shows that agents trained by participants exposed to both the passive and active social feedback achieve higher performance under a score mechanism that can be optimized from the trainer's perspective, and that the additional active social feedback keeps participants training their agents to learn policies that score higher under such a mechanism.

Autonomous Agents and Multi-Agent Systems
To appear.