A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

Eric Brochu, Vlad M. Cora and Nando de Freitas

Abstract

We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modeling with preferences, and hierarchical reinforcement learning) and a discussion of the pros and cons of Bayesian optimization based on our experiences.
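To make the loop described above concrete, the following is a minimal illustrative sketch (not the authors' code) of Bayesian optimization on a 1-D problem, assuming a Gaussian-process prior with an RBF kernel and the expected-improvement acquisition function; the objective, grid, and hyperparameters are placeholders chosen for the example.

# Minimal Bayesian optimization sketch: GP posterior + expected improvement.
# All function names and hyperparameters are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.2, signal_var=1.0):
    # Squared-exponential covariance between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_query, x_obs, y_obs, noise=1e-6):
    # Posterior mean and standard deviation of the GP at the query points.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_query, x_obs)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_obs
    var = rbf_kernel(x_query, x_query).diagonal() - np.sum((K_s @ K_inv) * K_s, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    # EI trades off exploitation (high mean) against exploration (high uncertainty).
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):
    # Stand-in for an expensive black-box function (hypothetical).
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 500)          # candidate points in the search space
x_obs = rng.uniform(0.0, 1.0, size=3)      # a few initial observations
y_obs = objective(x_obs)

for _ in range(10):                        # each iteration spends one expensive evaluation
    mu, sigma = gp_posterior(grid, x_obs, y_obs)
    ei = expected_improvement(mu, sigma, y_obs.max())
    x_next = grid[np.argmax(ei)]           # query where expected improvement is largest
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print("best x: %.3f, best value: %.3f" % (x_obs[np.argmax(y_obs)], y_obs.max()))

In this sketch the acquisition maximization is a cheap grid search, which is acceptable because evaluating the surrogate is inexpensive compared with the objective itself; the tutorial discusses more general acquisition functions and their trade-offs.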

Institution
University of British Columbia, Department of Computer Science
Number
UBC TR-2009-023 and arXiv:1012.2599
Year
2009