Improving Reinforcement Learning Function Approximators via Neuroevolution

Shimon Whiteson

Abstract

Reinforcement learning problems are commonly tackled with temporal difference methods, which use dynamic programming and statistical sampling to estimate the long-term value of taking each action in each state. In most problems of real-world interest, learning this value function requires a function approximator, which represents the mapping from state-action pairs to values via a concise, parameterized function and uses supervised learning methods to set its parameters. Function approximators make it possible to use temporal difference methods on large problems, but in practice the feasibility of doing so depends on the ability of the human designer to select an appropriate representation for the value function. My thesis presents a new approach to function approximation that automates some of these difficult design choices by coupling temporal difference methods with policy search methods such as evolutionary computation. It also presents a particular implementation that combines NEAT, a neuroevolutionary policy search method, and Q-learning, a popular temporal difference method, to yield a new method called NEAT+Q that automatically learns effective representations for neural network function approximators. Empirical results in a server job scheduling task demonstrate that NEAT+Q can outperform both NEAT and Q-learning with manually designed neural networks.
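The sketch below is a minimal illustration (not the author's implementation) of the inner loop that the abstract describes NEAT+Q as combining: a neural network serves as the Q-value function approximator, and Q-learning's temporal difference target drives supervised weight updates. The network topology here is fixed and hand-coded; in NEAT+Q, NEAT would evolve the topology while updates of this kind tune the weights during each fitness evaluation. All names (QNetwork, td_update, the toy chain environment) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class QNetwork:
    """Tiny one-hidden-layer MLP mapping a state vector to one Q-value per action."""
    def __init__(self, n_inputs, n_hidden, n_actions, lr=0.01):
        self.W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def forward(self, s):
        self.h = np.tanh(s @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def td_update(self, s, a, target):
        """One gradient step pushing Q(s, a) toward the temporal difference target."""
        q = self.forward(s)
        err = q[a] - target                      # TD error for the taken action
        grad_q = np.zeros_like(q)
        grad_q[a] = err
        dW2 = np.outer(self.h, grad_q)
        db2 = grad_q
        dh = grad_q @ self.W2.T * (1 - self.h ** 2)
        dW1 = np.outer(s, dh)
        db1 = dh
        for p, g in ((self.W1, dW1), (self.b1, db1), (self.W2, dW2), (self.b2, db2)):
            p -= self.lr * g

def run_episode(net, n_states=5, gamma=0.9, epsilon=0.1):
    """Toy deterministic chain: action 1 moves right (reward at the end), action 0 resets."""
    s_idx, total = 0, 0.0
    for _ in range(50):
        s = np.eye(n_states)[s_idx]
        q = net.forward(s)
        a = int(rng.integers(2)) if rng.random() < epsilon else int(np.argmax(q))
        if a == 1:
            next_idx = min(s_idx + 1, n_states - 1)
            r = 1.0 if next_idx == n_states - 1 else 0.0
        else:
            next_idx, r = 0, 0.0
        # Q-learning backup: bootstrap from the greedy value of the next state.
        target = r + gamma * np.max(net.forward(np.eye(n_states)[next_idx]))
        net.td_update(s, a, target)
        total += r
        s_idx = next_idx
    return total

net = QNetwork(n_inputs=5, n_hidden=8, n_actions=2)
for _ in range(200):
    run_episode(net)
print("return with greedy policy:", run_episode(net, epsilon=0.0))
```

In NEAT+Q as described, each candidate network produced by NEAT would be evaluated by running episodes of this kind, so that evolution searches over representations while Q-learning refines the weights within each evaluation.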

Book Title: AAAI 2005: Tenth Annual Doctoral Consortium
Month: July
Year: 2005