Efficient Abstraction Selection in Reinforcement Learning

Harm van Seijen, Shimon Whiteson and Leon Kester

Abstract

This paper introduces a novel approach to abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), in which a state is described by a set of state components. In abstraction selection, an agent must choose an abstraction from a set of candidate abstractions, each built up from a different combination of state components.
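To make the setting concrete, the following minimal Python sketch (not from the paper; the component names are hypothetical) illustrates how candidate abstractions can be viewed as subsets of the state components of a factored MDP, with each abstraction projecting the full state onto the components it retains.

```python
# Minimal illustration (assumption, not the paper's method): candidate
# abstractions as subsets of state components, plus a projection function.
from itertools import combinations

state_components = ["position", "velocity", "battery", "time_of_day"]

# Candidate abstractions: every non-empty combination of state components.
candidate_abstractions = [
    combo
    for size in range(1, len(state_components) + 1)
    for combo in combinations(state_components, size)
]

def project(state, abstraction):
    """Return the abstract state: only the components the abstraction keeps."""
    return tuple(state[c] for c in abstraction)

full_state = {"position": 3, "velocity": -1, "battery": 0.8, "time_of_day": 14}
print(len(candidate_abstractions))                    # 15 candidate abstractions
print(project(full_state, ("position", "velocity")))  # (3, -1)
```

Abstraction selection then amounts to choosing, from such a set of candidates, the abstraction the agent will learn and act with.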

Book Title
SARA 2013: Proceedings of the Tenth Symposium on Abstraction, Reformulation, and Approximation
Month
July
Note
Extended Abstract.
Pages
123–127
Year
2013