
Concurrent Layered Learning

Shimon Whiteson and Peter Stone

Abstract

Hierarchies are powerful tools for decomposing complex control tasks into manageable subtasks. Several hierarchical approaches have been proposed for creating agents that can execute these tasks. Layered learning is one such hierarchical paradigm that relies on learning the various subtasks necessary for achieving the complete high-level goal. Layered learning prescribes training low-level behaviors (those closer to the environmental inputs) prior to high-level behaviors. In past implementations these lower-level behaviors were always frozen before advancing to the next layer. In this paper, we hypothesize that there are situations where layered learning would work better were the lower layers allowed to keep learning concurrently with the training of subsequent layers, an approach we call concurrent layered learning. We identify a situation where concurrent layered learning is beneficial and present detailed empirical results verifying our hypothesis. In particular, we use neuro-evolution to concurrently learn two layers of a layered learning approach to a simulated robotic soccer keepaway task. The main contribution of this paper is evidence that there exist situations where concurrent layered learning outperforms traditional layered learning. Thus, we establish that, when using layered learning, the concurrent training of layers can be an effective option.
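
The following is a minimal illustrative sketch (not the authors' code) of the distinction the abstract draws: in traditional layered learning the low-level layer is frozen once trained, while in concurrent layered learning both layers keep adapting while the upper layer is trained. The toy fitness function, layer sizes, and function names are assumptions chosen only to make the contrast runnable.

```python
# Sketch: concurrent vs. traditional layered learning with a simple
# evolutionary loop over two layers of parameters. All specifics
# (fitness, sizes, mutation scheme) are illustrative assumptions.
import random

LAYER1_SIZE = 8   # parameters of the low-level behavior (closer to inputs)
LAYER2_SIZE = 4   # parameters of the high-level behavior built on top

def toy_fitness(layer1, layer2):
    # Stand-in for evaluating episodes of a task such as keepaway:
    # rewards layer-2 weights that match the current layer-1 parameters.
    return -sum((a - b) ** 2 for a, b in zip(layer1[:LAYER2_SIZE], layer2))

def mutate(params, sigma=0.1):
    return [p + random.gauss(0.0, sigma) for p in params]

def evolve(generations=50, pop_size=20, concurrent=True):
    # Layer 1 is assumed to have been trained first (layered learning's
    # bottom-up ordering); here it starts from a fixed "pretrained" vector.
    pretrained_layer1 = [random.uniform(-1, 1) for _ in range(LAYER1_SIZE)]
    population = [(list(pretrained_layer1),
                   [random.uniform(-1, 1) for _ in range(LAYER2_SIZE)])
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda ind: toy_fitness(*ind),
                        reverse=True)
        parents = scored[:pop_size // 2]
        children = []
        for layer1, layer2 in parents:
            if concurrent:
                # Concurrent layered learning: the lower layer keeps
                # adapting while the upper layer is trained.
                children.append((mutate(layer1), mutate(layer2)))
            else:
                # Traditional layered learning: layer 1 stays frozen.
                children.append((list(layer1), mutate(layer2)))
        population = parents + children
    best = max(population, key=lambda ind: toy_fitness(*ind))
    return toy_fitness(*best)

if __name__ == "__main__":
    random.seed(0)
    print("frozen lower layer:    ", round(evolve(concurrent=False), 4))
    print("concurrent lower layer:", round(evolve(concurrent=True), 4))
```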

Book Title
AAMAS 2003: Proceedings of the Second International Joint Conference on Autonomous Agents and Multi-Agent Systems
Month
July
Pages
193-200
Year
2003