Diversity-Driven Extensible Hierarchical Reinforcement Learning

Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu and Mai Xu

Abstract

Hierarchical reinforcement learning (HRL) has recently shown promising advances in speeding up learning, improving exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, where the ultimate goals are highly abstract, while the available actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned level-wise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL against nine baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four perspectives.
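
To make the level-wise construction concrete, below is a minimal sketch in Python of a multi-level control loop with a distance-based diversity bonus. Everything in it is an assumption made for illustration: the ToyEnv, the deterministic placeholder subpolicies, and the nearest-neighbor bonus are stand-ins, not the DEHRL architecture or the paper's actual diversity-driven objective.

    import numpy as np

    class ToyEnv:
        """Toy 2-D grid world: primitive actions 0-3 move the agent one step."""
        MOVES = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

        def reset(self):
            self.pos = np.zeros(2)
            return self.pos.copy()

        def step(self, action):
            self.pos += self.MOVES[action]
            return self.pos.copy()

    def run_subpolicy(env, levels, level, index):
        """Run subpolicy `index` of `level` to completion; return the end state.
        Placeholder behaviour: each index deterministically repeats one
        lower-level choice, so different indices reach visibly different
        states. A learned policy would instead map (state, index) to
        lower-level choices."""
        if level == 0:
            return env.step(index)  # level 0 emits a primitive action
        horizon, n_sub = levels[level]
        for _ in range(horizon):
            state = run_subpolicy(env, levels, level - 1, index % n_sub)
        return state

    def diversity_bonus(end_states):
        """Score each subpolicy by how far its end state lies from the
        nearest other subpolicy's end state (one simple reading of
        'more diverse subpolicies are more useful')."""
        out = []
        for i, s in enumerate(end_states):
            out.append(min(np.linalg.norm(s - t)
                           for j, t in enumerate(end_states) if j != i))
        return out

    if __name__ == "__main__":
        env = ToyEnv()
        levels = {1: (4, 4), 2: (3, 4)}  # level k: (horizon, n lower subpolicies)
        ends = []
        for idx in range(3):  # evaluate three subpolicies at the top level
            env.reset()
            ends.append(run_subpolicy(env, levels, 2, idx))
        print("end states:", [e.tolist() for e in ends])
        print("diversity bonuses:", diversity_bonus(ends))

In this sketch, a level-k subpolicy is executed by repeatedly committing to a level-(k-1) subpolicy for a fixed horizon, which is why the hierarchy stacks to any number of levels; in a real system, each level's policy would be trained with RL and the hand-coded bonus replaced by a learned measure of how distinguishable the subpolicies' outcomes are.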

Book Title
Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 – February 1, 2019
Editor
Pascal Van Hentenryck and Zhi-Hua Zhou
Month
January
Pages
4992–4999
Publisher
AAAI Press
Year
2019