Sequential Monte Carlo Methods to Train Neural Network Models

De Freitas, Nando, M. A. Niranjan, A. H. Gee, and A. Doucet

Abstract

We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent / sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving on-line, nonlinear, and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the option prices.
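The abstract's core idea, treating the network weights as a state estimated by a particle filter whose propagation step includes a gradient descent move, can be sketched as follows. This is an illustrative assumption, not the authors' exact HySIR: the tiny one-hidden-unit network, the particle count, noise levels, and learning rate are all made-up values, and the resampling is plain multinomial SIR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-unit network: y = w2 * tanh(w1 * x + b1) + b2.
# Each particle is one candidate weight vector theta = (w1, b1, w2, b2).
def predict(theta, x):
    w1, b1, w2, b2 = theta[..., 0], theta[..., 1], theta[..., 2], theta[..., 3]
    return w2 * np.tanh(w1 * x + b1) + b2

N = 500                                  # number of particles (assumed value)
theta = rng.normal(0.0, 1.0, (N, 4))     # particles over the 4 network weights
sigma_q, sigma_r = 0.05, 0.1             # process / observation noise (assumed)

def hysir_step(theta, x, y, lr=0.01):
    """One sequential update on a single observation (x, y)."""
    # 1. Propagate: random-walk jitter plus a gradient descent move on the
    #    squared error -- the gradient move is the "hybrid" part.
    err = predict(theta, x) - y
    h = np.tanh(theta[:, 0] * x + theta[:, 1])
    dh = (1.0 - h**2) * theta[:, 2]      # d y_hat / d(pre-activation) * w2
    grad = 2.0 * err[:, None] * np.stack([dh * x, dh, h, np.ones_like(h)], axis=1)
    theta = theta - lr * grad + sigma_q * rng.normal(size=theta.shape)
    # 2. Weight each particle by the Gaussian observation likelihood.
    logw = -0.5 * ((predict(theta, x) - y) / sigma_r) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3. Resample (multinomial) to concentrate particles on likely weights.
    idx = rng.choice(len(theta), size=len(theta), p=w)
    return theta[idx]

# Sequential training on a stream of noisy samples from a toy target function.
for t in range(200):
    x = rng.uniform(-2.0, 2.0)
    y = np.sin(x) + sigma_r * rng.normal()
    theta = hysir_step(theta, x, y)

# The particle cloud approximates the weight posterior, so predictive
# quantities (mean, density) come from averaging over particles.
print(np.mean(predict(theta, 1.0)))
```

Because the filter carries a full set of weight particles, one-step-ahead predictive densities (as in the option-pricing application) fall out by evaluating `predict` over all particles at the next input rather than collapsing to a point estimate.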

Address
Cambridge, MA, USA
ISSN
0899-7667
Journal
Neural Computation
Number
4
Pages
955–993
Publisher
MIT Press
Volume
12
Year
2000