
Optimal approximating Markov chains for Bayesian inference

Publication: Journal Article
Johndrow, JE; Mattingly, JC; Mukherjee, S; Dunson, D
August 13, 2015

Markov chain Monte Carlo (MCMC) is the dominant paradigm for posterior computation in Bayesian analysis. It is common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to computational optimality for these approximating Markov chains, or to when such approximations are justified relative to running shorter paths of the exact kernel. We give simple, sharp bounds for uniform approximations of uniformly mixing Markov chains. We then suggest a notion of optimality that incorporates computation time and approximation error, and use our bounds to draw general conclusions about the properties of good approximations in the uniformly mixing setting. The relevance of these properties is demonstrated in applications to a minibatching-based approximate MCMC algorithm for large-$n$ logistic regression and to low-rank approximations for Gaussian processes.
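The quantitative objects behind the abstract are perturbation bounds for uniformly mixing chains. As a hedged illustration only (the paper's own statements and constants differ), the generic form such a bound can take is: if the exact kernel $\mathcal{P}$ contracts total variation at rate $1-\alpha$ and the approximating kernel $\mathcal{P}_\epsilon$ is uniformly $\epsilon$-close to it, then the marginal error stays bounded uniformly in time,

$$
\sup_x \big\| \mathcal{P}(x,\cdot) - \mathcal{P}_\epsilon(x,\cdot) \big\|_{TV} \le \epsilon
\quad\text{and}\quad
\big\| \mu\mathcal{P} - \nu\mathcal{P} \big\|_{TV} \le (1-\alpha)\,\big\| \mu - \nu \big\|_{TV}
\;\Longrightarrow\;
\big\| \mu\mathcal{P}^n - \mu\mathcal{P}_\epsilon^n \big\|_{TV} \le \frac{\epsilon}{\alpha} \ \text{ for all } n,
$$

which follows from the telescoping identity $\mu\mathcal{P}^n - \mu\mathcal{P}_\epsilon^n = \sum_{k=0}^{n-1} \mu\mathcal{P}_\epsilon^{\,k}\,(\mathcal{P} - \mathcal{P}_\epsilon)\,\mathcal{P}^{\,n-1-k}$ and the contraction applied to each summand.

For the large-$n$ logistic regression application, a minibatching-based approximate kernel replaces the full-data log-likelihood in the acceptance ratio with a rescaled subsample estimate. The sketch below is a generic illustration of that idea under simple assumptions (random-walk Metropolis, flat prior, both current and proposed states scored on the same minibatch); it is not the authors' algorithm, and the function names are hypothetical.

```python
import numpy as np

def log_lik_subset(theta, X, y, idx):
    """Logistic log-likelihood on a subset of rows, rescaled to the full data size."""
    z = X[idx] @ theta
    # log p(y | x, theta) for y in {0, 1}: y*z - log(1 + exp(z)), computed stably
    ll = y[idx] * z - np.logaddexp(0.0, z)
    return (X.shape[0] / len(idx)) * ll.sum()

def minibatch_mh(X, y, n_iter=5000, batch=200, step=0.05, rng=None):
    """Random-walk Metropolis with a minibatch estimate of the log-likelihood.

    Each iteration scores the current and proposed theta on the same random
    minibatch, so the acceptance ratio uses a noisy surrogate of the exact
    posterior: the transition kernel is perturbed rather than sampled exactly.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    theta = np.zeros(p)
    samples = np.empty((n_iter, p))

    def log_post(th, idx):
        # Flat prior for simplicity; add a log-prior term here if desired.
        return log_lik_subset(th, X, y, idx)

    for t in range(n_iter):
        idx = rng.choice(n, size=batch, replace=False)
        prop = theta + step * rng.standard_normal(p)
        if np.log(rng.uniform()) < log_post(prop, idx) - log_post(theta, idx):
            theta = prop
        samples[t] = theta
    return samples
```

In a sketch like this, a larger batch size makes each transition closer to the exact kernel at a higher per-step cost, which is precisely the computation/accuracy trade-off that the abstract's notion of optimality is meant to capture.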


Publication Date

August 13, 2015

Citation

APA: Johndrow, J. E., Mattingly, J. C., Mukherjee, S., & Dunson, D. (2015). Optimal approximating Markov chains for Bayesian inference.
Chicago: Johndrow, James E., Jonathan C. Mattingly, Sayan Mukherjee, and David Dunson. “Optimal approximating Markov chains for Bayesian inference,” August 13, 2015.
ICMJE: Johndrow JE, Mattingly JC, Mukherjee S, Dunson D. Optimal approximating Markov chains for Bayesian inference. 2015 Aug 13.
MLA: Johndrow, James E., et al. Optimal approximating Markov chains for Bayesian inference. Aug. 2015.
NLM: Johndrow JE, Mattingly JC, Mukherjee S, Dunson D. Optimal approximating Markov chains for Bayesian inference. 2015 Aug 13.
