A Survey of Reinforcement Learning Informed by Natural Language
Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson and Tim Rocktäschel
To be successful in real-world tasks, reinforcement learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision-making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL in particular. We survey the state of the field, including work on instruction following, text games, and learning from textual domain knowledge. Finally, we call for the development of new environments as well as further investigation into the potential uses of recent Natural Language Processing (NLP) techniques for such tasks.