On the convergence of stochastic gradient MCMC algorithms with high-order integrators

Conference Publication
Chen, C; Ding, N; Carin, L
Published in: Advances in Neural Information Processing Systems
January 1, 2015

Recent advances in Bayesian learning with large-scale data have witnessed the emergence of stochastic gradient MCMC algorithms (SG-MCMC), such as stochastic gradient Langevin dynamics (SGLD), stochastic gradient Hamiltonian MCMC (SGHMC), and the stochastic gradient thermostat. While finite-time convergence properties of SGLD with a 1st-order Euler integrator have recently been studied, corresponding theory for general SG-MCMCs has not been explored. In this paper we consider general SG-MCMCs with high-order integrators, and develop theory to analyze their finite-time convergence properties and asymptotic invariant measures. Our theoretical results show faster convergence rates and more accurate invariant measures for SG-MCMCs with higher-order integrators. For example, with the proposed efficient 2nd-order symmetric splitting integrator, the mean square error (MSE) of the posterior average for SGHMC achieves an optimal convergence rate of L^{-4/5} at L iterations, compared to L^{-2/3} for SGHMC and SGLD with 1st-order Euler integrators. Furthermore, convergence results for decreasing-step-size SG-MCMCs are also developed, achieving the same convergence rates as their fixed-step-size counterparts for a specific decreasing sequence. Experiments on both synthetic and real datasets verify our theory and show the advantages of the proposed method in two large-scale real applications.
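To make the abstract's rates concrete: for a Kth-order integrator the paper's MSE bound is of order 1/(Lh) + h^{2K}; minimizing over the step size h gives h proportional to L^{-1/(2K+1)} and an optimal MSE of L^{-2K/(2K+1)}, i.e. L^{-2/3} for K = 1 (Euler) and L^{-4/5} for K = 2 (symmetric splitting). Below is a minimal Python sketch of one 2nd-order symmetric splitting step for the SGHMC dynamics dtheta = p dt, dp = -grad U(theta) dt - D p dt + sqrt(2D) dW, composed as half-step position, half-step momentum kick, exact friction/noise flow, then the mirror half-steps. The unit mass and temperature, scalar friction D, and the user-supplied stoch_grad (a minibatch gradient estimate) are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def sghmc_splitting_step(theta, p, stoch_grad, h, D=1.0, rng=np.random):
    # A: half-step position update (exact flow of dtheta = p dt)
    theta = theta + 0.5 * h * p
    # The stochastic gradient is evaluated once per step; the position does
    # not change between the two momentum kicks, so it can be reused.
    g = stoch_grad(theta)
    # B: half-step momentum kick with the stochastic gradient
    p = p - 0.5 * h * g
    # O: exact Ornstein-Uhlenbeck flow (friction + noise) over a full step
    c = np.exp(-D * h)
    p = c * p + np.sqrt(1.0 - c ** 2) * rng.standard_normal(p.shape)
    # B: second half-step momentum kick (same gradient, position unchanged)
    p = p - 0.5 * h * g
    # A: second half-step position update
    theta = theta + 0.5 * h * p
    return theta, p

Iterating this step and averaging a test function over the theta iterates yields the posterior average whose MSE the paper analyzes; note the scheme needs only one stochastic gradient evaluation per step, matching the cost of a 1st-order Euler update.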

Published In

Advances in Neural Information Processing Systems

ISSN

1049-5258

Publication Date

January 1, 2015

Volume

2015-January

Start / End Page

2278 / 2286

Related Subject Headings

  • 4611 Machine learning
  • 1702 Cognitive Sciences
  • 1701 Psychology
 

Citation

Chen, C., Ding, N., & Carin, L. (2015). On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In Advances in Neural Information Processing Systems (Vol. 2015-January, pp. 2278–2286).
