Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning
Luisa M Zintgraf, Leo Feng, Cong Lu, Maximilian Igl, Kristian Hartikainen, Katja Hofmann and Shimon Whiteson
To rapidly learn a new task, it is often essential for agents to explore efficiently, especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. However, many existing methods rely on dense rewards for meta-training and can fail catastrophically if the rewards are sparse. Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state together with the agent's task belief). We show empirically that HyperX meta-learns better task-exploration and adapts more successfully to new tasks than existing methods.
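To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of one way such a reward bonus over hyper-states could be computed: a random-network-distillation-style novelty signal on the concatenation of the environment state and the belief vector. The class name, network sizes, and the choice of RND as the novelty measure are illustrative assumptions.

import torch
import torch.nn as nn

class HyperStateBonus(nn.Module):
    """Illustrative novelty bonus over (state, belief) hyper-states."""

    def __init__(self, state_dim: int, belief_dim: int, feat_dim: int = 64):
        super().__init__()
        in_dim = state_dim + belief_dim  # hyper-state = state ++ belief

        def mlp() -> nn.Sequential:
            return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

        self.target = mlp()      # fixed, randomly initialised network
        self.predictor = mlp()   # trained to imitate the target
        for p in self.target.parameters():
            p.requires_grad_(False)

    def bonus(self, state: torch.Tensor, belief: torch.Tensor) -> torch.Tensor:
        # Concatenate environment state and task belief into a hyper-state.
        h = torch.cat([state, belief], dim=-1)
        # Prediction error is large for rarely visited hyper-states and
        # shrinks as the predictor is trained on them.
        return (self.predictor(h) - self.target(h)).pow(2).mean(dim=-1)

During meta-training, a bonus like this would be added to the environment reward; as the predictor is trained on visited hyper-states the bonus decays there, pushing the agent toward novel (state, belief) pairs even when the task rewards are sparse.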