On computation using Gibbs sampling for multilevel models
Multilevel models incorporating random effects at the various levels are enjoying increased popularity. An implicit problem with such models is identifiability. From a Bayesian perspective, formal identifiability is not an issue. Rather, when implementing iterative simulation-based model fitting, a poorly behaved Gibbs sampler frequently arises. The objective of this paper is to shed light on two computational issues in this regard. The first concerns autocorrelation in the sequence of iterates of the Markov chain. For estimable functions we clarify when, after convergence, autocorrelation will drop off to zero rapidly, enabling high effective sample size. The second concerns immediate convergence, i.e., when, at an arbitrary iteration, the simulated value of a variable is in fact an observation from the posterior distribution of the variable. Again, for estimable functions, we clarify when the chain will produce at each iteration a sample drawn essentially from the true posterior of the function. We provide both analytical and computational support for our conclusions, including exemplification for three multilevel models having normal, Poisson, and binary responses, respectively.
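The abstract's two phenomena can be seen in a toy overparameterized model. Below is a minimal sketch (not the paper's examples) of a Gibbs sampler for y_i ~ N(mu + alpha, 1) with vague normal priors on mu and alpha: only the sum mu + alpha is estimable. The individual coordinates mix extremely slowly (lag-1 autocorrelation near one), while the estimable function mu + alpha shows near-zero autocorrelation and is, at each iteration, essentially a draw from its posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y_i ~ N(mu + alpha, 1); only mu + alpha is estimable.
n = 50
y = rng.normal(2.0, 1.0, size=n)
ybar = y.mean()

V = 1e3                      # vague prior variance for mu and for alpha
prec = n + 1.0 / V           # full-conditional precision (common to both)
c = n / prec                 # shrinkage factor in each full conditional
s = np.sqrt(1.0 / prec)      # full-conditional standard deviation

# Gibbs sampler alternating the two normal full conditionals.
T = 5000
mu = np.empty(T)
alpha = np.empty(T)
m, a = 0.0, 0.0
for t in range(T):
    m = c * (ybar - a) + s * rng.standard_normal()   # mu | alpha, y
    a = c * (ybar - m) + s * rng.standard_normal()   # alpha | mu, y
    mu[t], alpha[t] = m, a

def lag1(x):
    """Sample lag-1 autocorrelation of a chain."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# The unidentified coordinate behaves like a near random walk,
# while the estimable sum produces nearly independent draws.
print(f"lag-1 autocorr, mu:       {lag1(mu[500:]):.4f}")
print(f"lag-1 autocorr, mu+alpha: {lag1(mu[500:] + alpha[500:]):.4f}")
```

With large prior variance V, the full conditionals force mu and alpha to be almost perfectly negatively correlated a posteriori, which is exactly the setting in which the abstract predicts slow mixing for the coordinates but rapid autocorrelation decay, and effectively immediate convergence, for the estimable function.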
Related Subject Headings
- Statistics & Probability
- 0801 Artificial Intelligence and Image Processing
- 0199 Other Mathematical Sciences
- 0104 Statistics