A convergence analysis for a class of practical variance-reduction stochastic gradient MCMC
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has been developed as a flexible family of scalable Bayesian sampling algorithms. However, there has been little theoretical analysis of the impact of minibatch size on the algorithm's convergence rate. In this paper, we prove that at the beginning of an SG-MCMC algorithm, i.e., under a limited computational budget/time, a larger minibatch size leads to a faster decrease of the mean squared error bound. This is because small minibatches introduce prominent noise into the stochastic gradient estimates, which motivates the need for variance reduction in practical SG-MCMC. Borrowing ideas from stochastic optimization, we propose a simple and practical variance-reduction technique for SG-MCMC that is efficient in both computation and storage. More importantly, we develop theory proving that our algorithm achieves a faster convergence rate than standard SG-MCMC. A number of large-scale experiments, ranging from Bayesian logistic regression to deep neural networks, validate the theory and demonstrate the superiority of the proposed variance-reduction SG-MCMC framework.
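The abstract does not spell out the algorithm, but the variance-reduction idea it borrows from stochastic optimization is commonly an SVRG-style control variate applied to a stochastic gradient Langevin dynamics (SGLD) update. The sketch below illustrates that general pattern under those assumptions; the function and parameter names (`svrg_sgld`, `grad_log_post_i`, `step_size`, etc.) are hypothetical and not the paper's code.

```python
import numpy as np

def svrg_sgld(grad_log_post_i, theta0, data, n_epochs=10, batch_size=32,
              step_size=1e-5, rng=None):
    """Sketch of SVRG-style variance-reduced SGLD.

    grad_log_post_i(theta, x) should return the gradient of one data point's
    contribution to the log-posterior (with the prior term split per point).
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    n = len(data)
    samples = []
    for _ in range(n_epochs):
        # Snapshot step: full-data gradient at an anchor point, recomputed once per epoch.
        theta_anchor = theta.copy()
        full_grad = np.mean([grad_log_post_i(theta_anchor, x) for x in data], axis=0)
        for _ in range(max(1, n // batch_size)):
            idx = rng.choice(n, size=batch_size, replace=False)
            # Control-variate estimate: minibatch gradient at theta, corrected by the
            # same minibatch at the anchor plus the stored full gradient.
            g_theta = np.mean([grad_log_post_i(theta, data[i]) for i in idx], axis=0)
            g_anchor = np.mean([grad_log_post_i(theta_anchor, data[i]) for i in idx], axis=0)
            grad_est = g_theta - g_anchor + full_grad
            # Langevin update: scaled gradient step plus injected Gaussian noise.
            noise = rng.normal(size=theta.shape) * np.sqrt(step_size)
            theta = theta + 0.5 * step_size * n * grad_est + noise
            samples.append(theta.copy())
    return samples
```

In this style of scheme, the extra cost over plain SGLD is one full pass over the data per epoch (the snapshot gradient), in exchange for a lower-variance per-iteration gradient estimate.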
Related Subject Headings
- Software Engineering
- 4009 Electronics, sensors and digital hardware
- 4007 Control engineering, mechatronics and robotics
- 4006 Communications engineering
- 0899 Other Information and Computing Sciences
- 0806 Information Systems
- 0804 Data Format