Automatic Feature Selection for Model-Based Reinforcement Learning in Factored MDPs

Mark Kroon and Shimon Whiteson

Abstract

Feature selection is an important challenge in machine learning. Unfortunately, most methods for automating feature selection are designed for supervised learning tasks and are thus either inapplicable or impractical for reinforcement learning. This paper presents a new approach to feature selection specifically designed for the challenges of reinforcement learning. In our method, the agent learns a model, represented as a dynamic Bayesian network, of a factored Markov decision process, deduces a minimal feature set from this network, and efficiently computes a policy on this feature set using dynamic programming methods. Experiments in a stock-trading benchmark task demonstrate that this approach can reliably deduce minimal feature sets and that doing so can substantially improve performance and reduce the computational expense of planning.
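To make the pipeline concrete, the sketch below illustrates one way the "deduce a minimal feature set" step could work once a DBN model has been learned. It is an illustrative sketch only, not the paper's implementation: it assumes the learned DBN is summarized by a parent-set mapping for each feature and a set of reward parents (names such as `parents` and `reward_parents` are hypothetical), and it takes the backward transitive closure of features that influence reward directly or through the transitions of other relevant features.

# Hypothetical sketch (Python): deducing a minimal feature set from a
# learned DBN structure, under the assumptions stated above.
#   parents[f]     -> state features whose current values influence the
#                     next value of feature f (for any action)
#   reward_parents -> state features that directly influence the reward
def minimal_feature_set(parents: dict, reward_parents: set) -> set:
    # A feature is relevant if it affects reward directly, or affects the
    # transition of an already-relevant feature; take the closure.
    relevant = set(reward_parents)
    frontier = list(reward_parents)
    while frontier:
        f = frontier.pop()
        for p in parents.get(f, set()):
            if p not in relevant:
                relevant.add(p)
                frontier.append(p)
    return relevant

if __name__ == "__main__":
    # Toy example: feature 'c' influences neither the reward nor any
    # relevant feature's transition, so it is excluded.
    parents = {"a": {"a"}, "b": {"a", "b"}, "c": {"c"}}
    print(minimal_feature_set(parents, reward_parents={"b"}))  # {'a', 'b'}

Planning (e.g., dynamic programming) would then be restricted to the returned features, which is what yields the reduced computational expense reported in the abstract.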

Book Title
ICMLA 2009: Proceedings of the Eighth International Conference on Machine Learning and Applications
Month
December
Pages
324–330
Year
2009