Abstract

This chapter considers the problem of Bayesian inference about the statistical model from which the data arose. It examines the asymptotic dependence of posterior model probabilities on the prior specification and the data, and proves that such problems of model choice are more sensitive to the prior than standard parametric inference is. Where improper priors are used, the definition of the Bayes factor becomes problematic, since it then depends on the arbitrary scaling constants of those priors. Many of the supposed difficulties can be avoided by specifying a single overall prior as a weighted sum of measures on the various model spaces, and by focusing attention directly on the typically proper posterior distribution across models that this implies. The chapter also discusses how to select the weights, using both subjective inputs and formal rules. It demonstrates the importance of using the Jeffreys prior within each model, a choice that goes a long way towards resolving many of the perceived problems connected with arbitrary scaling constants. The general theory is illustrated by constructing ‘reference posterior probabilities’ for normal regression models and by the analysis of an ESP experiment.
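
For concreteness, here is a minimal sketch of the construction described above, in notation of our own choosing rather than necessarily the chapter's. Suppose model $M_k$ has parameter space $\Theta_k$, sampling density $f_k(x \mid \theta_k)$, and a (possibly improper) prior measure $\Pi_k$ on $\Theta_k$. The single overall prior is the weighted sum

\[
\Pi \;=\; \sum_k w_k \, \Pi_k , \qquad w_k > 0 ,
\]

and, provided each marginal likelihood $m_k(x) = \int_{\Theta_k} f_k(x \mid \theta_k) \, d\Pi_k(\theta_k)$ is finite, the implied posterior distribution across models is

\[
P(M_k \mid x) \;=\; \frac{w_k \, m_k(x)}{\sum_j w_j \, m_j(x)} ,
\]

which can be proper even when the individual $\Pi_k$ are not. The scaling problem is also visible here: rescaling an improper $\Pi_k$ to $c \, \Pi_k$ multiplies $m_k(x)$, and hence the apparent evidence for $M_k$, by the arbitrary constant $c$. Adopting a canonically scaled within-model measure such as the Jeffreys prior, which is fixed by the Fisher information and carries no free multiplicative constant, is one way of pinning these constants down.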
