A convergence analysis for a class of practical variance-reduction stochastic gradient MCMC

Published

Journal Article

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has been developed as a flexible family of scalable Bayesian sampling algorithms. However, there has been little theoretical analysis of the impact of minibatch size on the algorithm's convergence rate. In this paper, we prove that at the beginning of an SG-MCMC algorithm, i.e., under a limited computational budget/time, a larger minibatch size leads to a faster decrease of the mean squared error bound. The reason is the prominent noise introduced by small minibatches when calculating stochastic gradients, motivating the necessity of variance reduction in SG-MCMC for practical use. Borrowing ideas from stochastic optimization, we propose a simple and practical variance-reduction technique for SG-MCMC that is efficient in both computation and storage. More importantly, we develop theory proving that our algorithm achieves a faster convergence rate than standard SG-MCMC. A number of large-scale experiments, ranging from Bayesian learning of logistic regression to deep neural networks, validate the theory and demonstrate the superiority of the proposed variance-reduction SG-MCMC framework.
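
For concreteness, the sketch below illustrates the general variance-reduction idea described in the abstract (SVRG-style control variates applied to stochastic gradient Langevin dynamics). It is a minimal illustration under stated assumptions, not the paper's exact algorithm; the callbacks grad_log_lik_sum and grad_log_prior are hypothetical placeholders supplied by the user.

    import numpy as np

    def svrg_ld(grad_log_lik_sum, grad_log_prior, data, theta0,
                step_size=1e-5, minibatch=32, epoch_len=100, n_epochs=10,
                rng=None):
        """Sketch of SVRG-style variance-reduced SGLD (illustrative only).

        grad_log_lik_sum(theta, subset): sum of per-example log-likelihood
            gradients over `subset` (placeholder callback).
        grad_log_prior(theta): gradient of the log prior (placeholder callback).
        """
        rng = np.random.default_rng() if rng is None else rng
        N = len(data)
        theta = np.asarray(theta0, dtype=float).copy()
        samples = []
        for _ in range(n_epochs):
            # Anchor point: full-data likelihood gradient, recomputed once per epoch.
            theta_anchor = theta.copy()
            g_anchor = grad_log_lik_sum(theta_anchor, data)
            for _ in range(epoch_len):
                idx = rng.choice(N, size=minibatch, replace=False)
                batch = [data[i] for i in idx]
                # Control-variate (variance-reduced) estimate of the full
                # likelihood gradient.
                g_lik = g_anchor + (N / minibatch) * (
                    grad_log_lik_sum(theta, batch)
                    - grad_log_lik_sum(theta_anchor, batch))
                g = grad_log_prior(theta) + g_lik
                # Langevin update: gradient step plus injected Gaussian noise.
                theta = (theta + step_size * g
                         + rng.normal(scale=np.sqrt(2 * step_size),
                                      size=theta.shape))
                samples.append(theta.copy())
        return samples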

Cited Authors

  • Chen, C; Wang, W; Zhang, Y; Su, Q; Carin, L

Published Date

  • January 1, 2019

Published In

  • Science China Information Sciences

Volume / Issue

  • 62 / 1

Electronic International Standard Serial Number (EISSN)

  • 1869-1919

International Standard Serial Number (ISSN)

  • 1674-733X

Digital Object Identifier (DOI)

  • 10.1007/s11432-018-9656-y

Citation Source

  • Scopus