An objective Bayes factor with improper priors
A new look at the use of improper priors in Bayes factors for model comparison is presented. As is well known, in this case the Bayes factor is defined only up to an arbitrary constant. Most current methods overcome the problem by using part of the sample to train the Bayes factor (the fractional Bayes factor) or to transform the improper prior into a proper distribution (the intrinsic Bayes factor), and use the remainder of the sample for the model comparison. An alternative approach is provided that relies on matching divergences between density functions in order to fix the constant appearing in the Bayes factor. The divergences used are the Kullback–Leibler divergence and the Fisher information divergence; the latter is crucial, as it does not depend on an unknown normalizing constant. The performance of the proposed method is demonstrated through numerous illustrations and comparisons; its main advantage over existing methods is that it requires no input from the experimenter, so it is fully automated.
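To make the indeterminacy concrete, the following sketch (standard notation, not taken from the paper; the constants $c_1, c_2$ and shapes $h_1, h_2$ are illustrative) shows why a Bayes factor under improper priors carries an arbitrary constant, and why a score-based divergence such as the Fisher information divergence does not:

```latex
% An improper prior fixes only the shape h_i, not the constant c_i:
\[
  \pi_i(\theta_i) = c_i\, h_i(\theta_i), \qquad i = 1, 2,
  \qquad \int h_i(\theta_i)\, \mathrm{d}\theta_i = \infty .
\]
% Hence the Bayes factor for model 1 versus model 2 carries the
% arbitrary ratio c_1/c_2:
\[
  B_{12}(x)
  = \frac{c_1 \int f_1(x \mid \theta_1)\, h_1(\theta_1)\, \mathrm{d}\theta_1}
         {c_2 \int f_2(x \mid \theta_2)\, h_2(\theta_2)\, \mathrm{d}\theta_2}.
\]
% The Fisher information divergence between densities p and q depends
% only on their score functions, which are invariant to rescaling p or
% q by a constant, so it is well defined for unnormalized densities:
\[
  D_F(p \,\|\, q)
  = \int p(x)\,
    \bigl\| \nabla_x \log p(x) - \nabla_x \log q(x) \bigr\|^2
    \mathrm{d}x .
\]
```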