Bridging the gap between stochastic gradient MCMC and stochastic optimization

Published

Conference Paper

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian analogs of popular stochastic optimization methods, but this connection is not well studied. We explore the relationship by applying simulated annealing to an SG-MCMC algorithm. Furthermore, we extend recent SG-MCMC methods with two key components: i) adaptive preconditioners (as in Adagrad or RMSprop), and ii) adaptive element-wise momentum weights. The zero-temperature limit gives a novel stochastic optimization method with adaptive element-wise momentum weights, whereas conventional optimization methods use only a single, shared, static momentum weight. Under certain assumptions, our theoretical analysis suggests that the proposed simulated annealing approach converges close to the global optimum. Experiments on several deep neural network models yield state-of-the-art results compared to related stochastic optimization algorithms.
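As a rough, hedged sketch of the general idea summarized in the abstract (not the authors' exact update rules), the snippet below combines an RMSprop-style element-wise preconditioner, an adaptive element-wise momentum weight, and injected Gaussian noise whose scale is set by a temperature annealed toward zero, so the zero-temperature limit behaves like a deterministic optimizer. All names and schedules here (annealed_sgmcmc_step, eta, anneal, the clipping of alpha) are illustrative assumptions, not details from the paper.

```python
import numpy as np


def annealed_sgmcmc_step(theta, u, v, alpha, grad, t, *, eta=1e-3,
                         precond_decay=0.99, anneal=0.5, eps=1e-8,
                         rng=np.random.default_rng(0)):
    """One illustrative update: RMSprop-style preconditioning, an adaptive
    element-wise momentum weight, and annealed Gaussian noise.
    A hedged sketch only, not the algorithm proposed in the paper."""
    # Element-wise preconditioner from a running second-moment estimate,
    # as in Adagrad / RMSprop
    v = precond_decay * v + (1.0 - precond_decay) * grad ** 2
    g = 1.0 / (np.sqrt(v) + eps)

    # Simulated-annealing schedule: temperature decays with the step count,
    # so the injected noise vanishes and the update approaches a pure
    # stochastic-optimization step
    temperature = 1.0 / (1.0 + t) ** anneal

    # Adaptive element-wise momentum weight: coordinates whose momentum is
    # "hotter" than the target temperature are damped more (clipped here
    # purely for numerical safety in this toy sketch)
    alpha = np.clip(alpha + u ** 2 - eta * temperature, 0.0, 0.999)

    # Momentum update with temperature-scaled Gaussian noise
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * eta * g * temperature)
    u = (1.0 - alpha) * u - eta * g * grad + noise

    return theta + u, u, v, alpha


# Toy usage: minimize a quadratic while annealing from sampling toward optimization
theta = np.array([5.0, -3.0])
u = np.zeros_like(theta)
v = np.zeros_like(theta)
alpha = np.full_like(theta, 0.9)
for t in range(5000):
    grad = 2.0 * (theta - np.array([1.0, 2.0]))  # gradient of ||theta - [1, 2]||^2
    theta, u, v, alpha = annealed_sgmcmc_step(theta, u, v, alpha, grad, t)
print(theta)  # approaches [1, 2] as the temperature is annealed toward zero
```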

Cited Authors

  • Chen, C; Carlson, D; Gan, Z; Li, C; Carin, L

Published Date

  • January 1, 2016

Published In

  • Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016

Start / End Page

  • 1051 - 1060

Citation Source

  • Scopus