Inverse Reinforcement Learning from Failure

Kyriacos Shiarlis, João Messias and Shimon Whiteson

Abstract

Inverse reinforcement learning (IRL) allows autonomous agents to learn to solve complex tasks from successful demonstrations. However, in many settings, e.g., when a human learns the task by trial and error, failed demonstrations are also readily available. In addition, in some tasks, purposely generating failed demonstrations may be easier than generating successful ones. Since existing IRL methods cannot make use of failed demonstrations, in this paper we propose inverse reinforcement learning from failure (IRLF), which exploits both successful and failed demonstrations. Starting from the state-of-the-art maximum causal entropy IRL method, we propose a new constrained optimisation formulation that accommodates both types of demonstrations while remaining convex. We then derive update rules for learning reward functions and policies. Experiments on both simulated and real-robot data demonstrate that IRLF converges faster and generalises better than maximum causal entropy IRL, especially when few successful demonstrations are available.
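For intuition only, below is a minimal sketch of the core idea the abstract describes: fit a linear reward r(s) = θᵀφ(s) so that the induced policy's feature expectations move toward those of successful demonstrations and away from those of failed ones. This is not the paper's method: it uses plain (non-causal) soft value iteration on a small finite-horizon tabular MDP and a fixed trade-off weight lam in place of the paper's convex constrained formulation with learned multipliers. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def soft_value_iteration(P, r, T):
    """Soft (max-entropy) backward pass; returns time-ordered stochastic policies.

    P: transitions, shape (S, A, S'); r: state rewards, shape (S,); T: horizon.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    policies = []
    for _ in range(T):
        Q = r[:, None] + P @ V                       # Q[s, a]
        Qmax = Q.max(axis=1, keepdims=True)          # stabilise log-sum-exp
        V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
        policies.append(np.exp(Q - V[:, None]))      # pi(a | s) at this step
    return policies[::-1]                            # reorder: first step first

def expected_features(P, phi, policies, s0):
    """Expected feature counts of the policy, starting from state s0."""
    d = np.zeros(P.shape[0]); d[s0] = 1.0            # state distribution
    mu = np.zeros(phi.shape[1])
    for pi in policies:
        mu += d @ phi                                # accumulate feature counts
        d = np.einsum('s,sa,sap->p', d, pi, P)       # propagate one step
    return mu

def irl_from_failure(P, phi, mu_succ, mu_fail, s0, T,
                     lam=0.5, lr=0.1, iters=200):
    """Gradient ascent on theta: match successful features, repel failed ones.

    mu_succ / mu_fail: empirical feature expectations of successful / failed
    demonstrations. lam is a hand-tuned stand-in for the paper's multipliers.
    """
    theta = np.zeros(phi.shape[1])
    for _ in range(iters):
        policies = soft_value_iteration(P, phi @ theta, T)
        mu_pi = expected_features(P, phi, policies, s0)
        # Pull policy features toward successes, push away from failures.
        grad = (mu_succ - mu_pi) - lam * (mu_fail - mu_pi)
        theta += lr * grad
    return theta
```

In the paper itself, the balance between matching successes and avoiding failures is handled inside the constrained optimisation rather than by a hand-tuned lam; the fixed weight here is purely a simplification for illustration.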

Book Title
AAMAS 2016: Proceedings of the Fifteenth International Joint Conference on Autonomous Agents and Multi-Agent Systems
Month
May
Note
Nominated for Best Student Paper.
Pages
1060–1068
Year
2016