Bayesian Learning via Stochastic Gradient Langevin Dynamics

Yee Whye Teh (Statistics Department, University of Oxford)

The Bayesian approach to machine learning is a theoretically well-motivated framework for learning from data. It provides a coherent way to reason about uncertainty and built-in protection against overfitting. However, computations in this framework can be expensive, and most approaches to Bayesian computation do not scale well to the big-data setting. In this talk we propose a new computational approach to Bayesian learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates converge to samples from the true posterior distribution as the stepsize is annealed. We apply the method to logistic regression and latent Dirichlet allocation, showing state-of-the-art performance.

Joint work with Max Welling and Sam Patterson.
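The update the abstract alludes to takes a stochastic gradient step on the log posterior and injects Gaussian noise whose variance matches the stepsize: theta_{t+1} = theta_t + (eps_t / 2) * (grad log p(theta_t) + (N / n) * sum over the mini-batch of grad log p(x_i | theta_t)) + eta_t, with eta_t ~ Normal(0, eps_t). Below is a minimal NumPy sketch of this idea on a toy Bayesian logistic regression problem; it is not the authors' implementation, and the synthetic data, the Gaussian prior, and the step-size constants a, b, gamma are illustrative assumptions (the polynomial schedule eps_t = a * (b + t)^(-gamma) follows the form used in the paper).

```python
import numpy as np

# Stochastic Gradient Langevin Dynamics (SGLD) sketch on a toy
# Bayesian logistic regression with a standard Gaussian prior.
# Update: theta += (eps_t / 2) * (grad_log_prior
#         + (N / n) * mini-batch grad_log_lik) + Normal(0, eps_t) noise.

rng = np.random.default_rng(0)

# Synthetic data (illustrative assumption, not from the talk).
N, d, n = 10_000, 5, 100            # dataset size, features, mini-batch size
theta_true = rng.normal(size=d)
X = rng.normal(size=(N, d))
y = (rng.random(N) < 1.0 / (1.0 + np.exp(-X @ theta_true))).astype(float)

def grad_log_prior(theta):
    """Gradient of the log N(0, I) prior."""
    return -theta

def grad_log_lik(theta, Xb, yb):
    """Gradient of the Bernoulli log-likelihood on a mini-batch."""
    p = 1.0 / (1.0 + np.exp(-Xb @ theta))
    return Xb.T @ (yb - p)

# Polynomially decaying stepsize eps_t = a * (b + t)^(-gamma), gamma in (0.5, 1].
a, b, gamma = 1e-4, 10.0, 0.55
T = 5_000

theta = np.zeros(d)
samples = []
for t in range(T):
    eps_t = a * (b + t) ** (-gamma)
    idx = rng.choice(N, size=n, replace=False)
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, X[idx], y[idx])
    # Injected noise has variance eps_t, matching the stepsize.
    theta = theta + 0.5 * eps_t * grad + rng.normal(scale=np.sqrt(eps_t), size=d)
    samples.append(theta.copy())

# Later iterates behave like (approximate) posterior samples.
posterior_mean = np.mean(samples[T // 2:], axis=0)
print("true theta:     ", np.round(theta_true, 2))
print("posterior mean: ", np.round(posterior_mean, 2))
```

The appeal of the scheme is that each iteration touches only a mini-batch, yet the noise level is calibrated so the chain transitions from stochastic optimization to posterior sampling as the stepsize decays.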

Speaker bio

Yee Whye Teh is Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford. Prior to this appointment he was a lecturer and then reader at the Gatsby Computational Neuroscience Unit, UCL. Yee Whye is interested in machine learning and the interface between statistics and computation. His recent focus has been on Bayesian nonparametric modelling and Bayesian computation.

