Abstract

Commonly used noninformative priors for generalized linear models (GLMs), such as the uniform prior or Jeffreys's prior, are generally improper and thus defined only up to arbitrary normalizing constants. Consequently, Bayes factors and posterior model probabilities are not well defined under these noninformative priors, making Bayesian hypothesis testing and model selection impossible. To overcome these difficulties, we derive the intrinsic Bayes factor (IBF) of Berger and Pericchi (1996a, In: Bayesian Analysis V. Bernardo, J.M. et al. (Eds.), Oxford University Press, Oxford, pp. 25–44; J. Amer. Statist. Assoc. 91, 109–122) and the fractional Bayes factor (FBF) of O'Hagan (1995, J. Roy. Statist. Soc. Ser. B 57, 99–138) for the class of GLMs. Using the fact that the posterior distribution of the regression coefficients in a GLM is approximately normally distributed, we derive an asymptotic form for these default Bayes factors. We derive an explicit form of the IBF for the binary, Poisson, and exponential regression models. In addition, we derive criteria for obtaining the minimal training sample for this class of models. We demonstrate our results with two real datasets and a simulated dataset.
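As a rough illustration of the asymptotic idea (not the paper's own derivation), consider the FBF under a flat improper prior combined with a Laplace approximation, which is justified by the approximate posterior normality mentioned above. The Hessian terms then cancel between the full and fractional marginal likelihoods, and the log FBF reduces to a penalized likelihood-ratio statistic: (1 − b)(ℓ̂₁ − ℓ̂₀) + ((p₁ − p₀)/2) log b, where ℓ̂ᵢ and pᵢ are the maximized log-likelihood and dimension of model i and b is the fraction. The sketch below applies this to binary (logistic) regression; the choice b = p/n is one common convention tied to the minimal training sample size, and is an assumption here.

```python
import numpy as np

def logistic_mle(X, y, iters=50):
    """Fit a logistic regression by Newton-Raphson; return MLE and log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = p * (1.0 - p)
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta = beta + np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    return beta, loglik

def log_fbf(ll1, p1, ll0, p0, b):
    """Log fractional Bayes factor of model 1 vs model 0, flat prior + Laplace
    approximation: the |Hessian| terms cancel in m_i / m_i^b, leaving a
    penalized likelihood ratio."""
    return (1.0 - b) * (ll1 - ll0) + 0.5 * (p1 - p0) * np.log(b)

# Simulated binary-regression data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
eta = 0.5 + 1.5 * x
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

X1 = np.column_stack([np.ones(n), x])   # M1: intercept + slope
X0 = np.ones((n, 1))                    # M0: intercept only

_, ll1 = logistic_mle(X1, y)
_, ll0 = logistic_mle(X0, y)

b = X1.shape[1] / n  # fraction = (minimal training sample size) / n, an assumed choice
print(f"log FBF (M1 vs M0) = {log_fbf(ll1, 2, ll0, 1, b):.2f}")
```

Because the data are generated with a strong slope, the log FBF comes out positive, favoring the model that includes the covariate; the (p₁ − p₀)/2 · log b term plays the role of a BIC-like complexity penalty.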
