Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

Luisa M Zintgraf, Leo Feng, Cong Lu, Maximilian Igl, Kristian Hartikainen, Katja Hofmann and Shimon Whiteson

Abstract

To rapidly learn a new task, it is often essential for agents to explore efficiently, especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. Many existing methods, however, rely on dense rewards for meta-training, and can fail catastrophically if the rewards are sparse. Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state and the agent's task belief). We show empirically that HyperX meta-learns better task-exploration and adapts more successfully to new tasks than existing methods.
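To make the core idea concrete, below is a minimal sketch of a novelty bonus computed over approximate hyper-states, in the style of random network distillation. This is an illustrative assumption, not the authors' released implementation: the class name, network sizes, and the coefficient `beta` are hypothetical, and the belief is assumed to be a fixed-size vector (e.g. the parameters of a variational posterior over tasks).

```python
import torch
import torch.nn as nn

class HyperStateNoveltyBonus(nn.Module):
    """Illustrative novelty bonus over approximate hyper-states.

    A hyper-state concatenates the environment state with the agent's
    task belief. The bonus is the prediction error of a trained network
    against a fixed, randomly initialised target network, so it is large
    for rarely visited hyper-states and shrinks as they are revisited.
    """

    def __init__(self, state_dim: int, belief_dim: int, feat_dim: int = 64):
        super().__init__()
        in_dim = state_dim + belief_dim
        self.target = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                    nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim))
        # The target network stays fixed; only the predictor is trained.
        for p in self.target.parameters():
            p.requires_grad_(False)

    def forward(self, state: torch.Tensor, belief: torch.Tensor) -> torch.Tensor:
        h = torch.cat([state, belief], dim=-1)  # approximate hyper-state
        # Squared prediction error: used both as the exploration bonus
        # and as the training loss for the predictor network.
        return (self.predictor(h) - self.target(h)).pow(2).mean(dim=-1)

# Hypothetical usage during meta-training: add the scaled bonus to the
# environment reward so the meta-learner is driven to visit novel
# (state, belief) pairs even when environment rewards are sparse.
# r_total = r_env + beta * bonus_net(state, belief)
```

The design choice worth noting is that novelty is measured over the joint (state, belief) space rather than states alone: revisiting a familiar state under a new task belief still counts as novel, which is what encourages task exploration during meta-training.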

Book Title: Proceedings of the 38th International Conference on Machine Learning
Editor: Meila, Marina and Zhang, Tong
Month: 18–24 Jul
Pages: 12991–13001
Publisher: PMLR
Series: Proceedings of Machine Learning Research
Volume: 139
Year: 2021