Robust Bayesian inference via coarsening.
Journal Article
The standard approach to Bayesian inference is based on the assumption that the distribution of the data belongs to the chosen model class. However, even a small violation of this assumption can have a large impact on the outcome of a Bayesian procedure. We introduce a novel approach to Bayesian inference that improves robustness to small departures from the model: rather than conditioning on the event that the observed data are generated by the model, one conditions on the event that the model generates data close to the observed data, in a distributional sense. When closeness is defined in terms of relative entropy, the resulting "coarsened" posterior can be approximated by simply tempering the likelihood, that is, by raising the likelihood to a fractional power. Thus, inference can usually be implemented via standard algorithms, and one can even obtain analytical solutions when using conjugate priors. Some theoretical properties are derived, and we illustrate the approach with real and simulated data using mixture models and autoregressive models of unknown order.
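To make the tempering approximation concrete, below is a minimal sketch of a power-likelihood (coarsened) posterior update in a conjugate Beta-Bernoulli model. It is illustrative only: the exponent ζ = α/(α + n) reflects the relative-entropy coarsening approximation summarized in the abstract, while the function name, hyperparameters, and simulated data are assumptions rather than material from the article.

```python
import numpy as np

def coarsened_beta_bernoulli(x, a=1.0, b=1.0, alpha=50.0):
    """Power-likelihood (tempered) posterior for a Beta-Bernoulli model.

    Raising the Bernoulli likelihood to a fractional power zeta preserves
    conjugacy: the coarsened posterior is
        Beta(a + zeta * sum(x), b + zeta * (n - sum(x))),
    with zeta = alpha / (alpha + n), so larger alpha means less coarsening.
    """
    x = np.asarray(x)
    n = x.size
    zeta = alpha / (alpha + n)          # tempering exponent in (0, 1]
    a_post = a + zeta * x.sum()         # down-weighted success count
    b_post = b + zeta * (n - x.sum())   # down-weighted failure count
    return a_post, b_post

# Illustrative usage: the effective sample size zeta * n = alpha * n / (alpha + n)
# is capped near alpha, so the coarsened posterior stops concentrating as n grows,
# which is what limits the influence of small model misspecification.
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=1000)
a_post, b_post = coarsened_beta_bernoulli(x, alpha=50.0)
print(a_post, b_post, a_post / (a_post + b_post))
```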
Duke Authors
- Dunson, DB
Cited Authors
- Miller, JW; Dunson, DB
Published Date
- January 2019
Published In
- Journal of the American Statistical Association
Volume / Issue
- 114 / 527
Start / End Page
- 1113 - 1125
PubMed ID
- 31942084
PubMed Central ID
- PMC6961963
Electronic International Standard Serial Number (EISSN)
- 1537-274X
International Standard Serial Number (ISSN)
- 0162-1459
Digital Object Identifier (DOI)
- 10.1080/01621459.2018.1469995
Language
- eng