TACO: Learning Task Decomposition via Temporal Alignment for Control
Kyriacos Shiarlis, Markus Wulfmeier, Sasha Salter, Shimon Whiteson and Ingmar Posner
Many advanced Learning from Demonstration (LfD) methods consider the decomposition of complex, real-world tasks into simpler sub-tasks. By reusing the corresponding sub-policies within and between tasks, these methods obtain training data for each sub-policy from multiple high-level tasks and can compose the sub-policies to perform novel ones. However, most existing approaches to modular LfD either focus on learning a single high-level task or depend on domain knowledge and temporal segmentation. By contrast, we propose a weakly supervised, domain-agnostic approach based on task sketches, which include only the sequence of sub-tasks performed in each demonstration. Our approach simultaneously aligns the sketches with the observed demonstrations and learns the required sub-policies; this joint optimisation improves generalisation compared to separate optimisation procedures. We evaluate the approach on multiple domains, including a simulated 3D robot arm control task using purely image-based observations. The approach performs comparably to fully supervised approaches while requiring significantly less annotation effort, and significantly outperforms methods that separate segmentation from imitation.
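To make the alignment idea concrete: a task sketch lists only the order of sub-tasks, so the learner must decide which timesteps of a demonstration belong to which sketch entry. The toy sketch below (an assumption for illustration, not the paper's actual training objective) finds the best monotonic assignment of timesteps to sketch entries by dynamic programming, given per-timestep log-likelihoods of each sub-task; all function and variable names here are hypothetical.

```python
import numpy as np

def align_sketch(log_probs, sketch):
    """Best monotonic alignment of a sub-task sketch to a trajectory.

    log_probs: (T, K) array of per-timestep log-likelihoods of K sub-tasks
               (e.g. from a sub-task classifier; illustrative assumption).
    sketch:    ordered list of sub-task indices performed in the demo.
    Returns a length-T list assigning each timestep to a sketch position.
    """
    T, L = log_probs.shape[0], len(sketch)
    NEG = -np.inf
    # dp[t, j]: best score for a path that is on sketch entry j at time t
    dp = np.full((T, L), NEG)
    back = np.zeros((T, L), dtype=int)
    dp[0, 0] = log_probs[0, sketch[0]]
    for t in range(1, T):
        for j in range(L):
            stay = dp[t - 1, j]                       # remain on entry j
            move = dp[t - 1, j - 1] if j > 0 else NEG  # advance from j-1
            if stay >= move:
                dp[t, j], back[t, j] = stay, j
            else:
                dp[t, j], back[t, j] = move, j - 1
            dp[t, j] += log_probs[t, sketch[j]]
    # trace back from the final sketch entry (the sketch must be completed)
    path = [L - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]
```

A segmentation recovered this way can be used to gather training data for each sub-policy; the paper's contribution is to perform this alignment jointly with policy learning rather than as a separate preprocessing step.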