A tutorial on stochastic approximation algorithms for training Restricted Boltzmann Machines and Deep Belief Nets

Kevin Swersky, Bo Chen, Ben Marlin and Nando de Freitas

Abstract

In this study, we provide a direct comparison of the Stochastic Maximum Likelihood algorithm and Contrastive Divergence for training Restricted Boltzmann Machines using the MNIST data set. We demonstrate that Stochastic Maximum Likelihood is superior when using the Restricted Boltzmann Machine as a classifier, and that the algorithm can be greatly improved using the technique of iterate averaging from the field of stochastic approximation. We further show that training with optimal parameters for classification does not necessarily lead to optimal results when Restricted Boltzmann Machines are stacked to form a Deep Belief Network. In our experiments we observe that fine-tuning a Deep Belief Network significantly changes the distribution of the latent data, even though the parameter changes are negligible.
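To make the two techniques named in the abstract concrete, here is a minimal numpy sketch of a Stochastic Maximum Likelihood (persistent Contrastive Divergence) update for a binary RBM, with Polyak-style iterate averaging noted at the end. The function names, single-step Gibbs chain, and learning-rate handling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sml_update(W, b, c, data_batch, persistent_v, lr, rng):
    """One Stochastic Maximum Likelihood step for a binary RBM (a sketch).

    W: (n_visible, n_hidden) weights; b: visible biases; c: hidden biases.
    persistent_v: fantasy particles carried across updates -- the key
    difference from Contrastive Divergence, which restarts the Gibbs
    chain at the data on every update.
    """
    # Positive phase: hidden-unit expectations given the training data.
    ph_data = sigmoid(data_batch @ W + c)

    # Negative phase: advance the persistent chain one full Gibbs step.
    ph_model = sigmoid(persistent_v @ W + c)
    h_sample = (rng.random(ph_model.shape) < ph_model).astype(float)
    pv_model = sigmoid(h_sample @ W.T + b)
    persistent_v = (rng.random(pv_model.shape) < pv_model).astype(float)
    ph_neg = sigmoid(persistent_v @ W + c)

    # Stochastic gradient step on the log-likelihood.
    n = data_batch.shape[0]
    W += lr * (data_batch.T @ ph_data - persistent_v.T @ ph_neg) / n
    b += lr * (data_batch.mean(axis=0) - persistent_v.mean(axis=0))
    c += lr * (ph_data.mean(axis=0) - ph_neg.mean(axis=0))
    return persistent_v

# Iterate averaging (Polyak-Ruppert): instead of keeping the final noisy
# iterate, maintain a running mean of the parameters across updates, e.g.
#   W_avg += (W - W_avg) / t   after the t-th update,
# and use W_avg at evaluation time.
```

For Contrastive Divergence, one would instead reinitialize the chain at `data_batch` before the negative phase; everything else in the update is unchanged, which is what makes the two algorithms directly comparable.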

Book Title
Information Theory and Applications Workshop (ITA)
Pages
1–10
Year
2010