Abstract

A Bayes factor between two models can be greatly affected by the prior distributions on the model parameters. When prior information is weak, very dispersed proper prior distributions are known to create a problem for the Bayes factor when the competing models differ in dimension, and the concern is even greater when one of the models is infinite-dimensional. We therefore propose a method that uses training samples to calibrate the prior distributions so that they achieve a reasonable level of ‘information’. The calibrated Bayes factor can then be computed on the remaining data. This method makes no assumptions about model form (parametric or nonparametric) and can be used with both proper and improper priors. We illustrate, through simulation studies and a real data example, that the calibrated Bayes factor yields robust and reliable model preferences in a variety of situations.
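To make the idea concrete, here is a minimal illustrative sketch, not the authors' actual procedure: two nested Gaussian models with known variance are compared, a random training subset is used to set the prior mean and spread for the extra parameter, and the Bayes factor is then computed analytically on the held-out data. The model choices, the prior-calibration rule (`m0` = training mean, `tau0_sq = sigma_sq / n_train`), and all function names are assumptions made for this example only.

```python
import numpy as np

def log_marginal_m1(y, m0, tau0_sq, sigma_sq):
    """Log marginal likelihood under M1: y_i ~ N(mu, sigma_sq),
    mu ~ N(m0, tau0_sq), with mu integrated out analytically."""
    n = len(y)
    d = y - m0
    ss, s = np.sum(d**2), np.sum(d)
    # det and inverse of sigma_sq*I + tau0_sq*11' via the matrix
    # determinant lemma and Sherman-Morrison
    log_det = n * np.log(sigma_sq) + np.log1p(n * tau0_sq / sigma_sq)
    quad = (ss - (tau0_sq / sigma_sq) / (1 + n * tau0_sq / sigma_sq) * s**2) / sigma_sq
    return -0.5 * (n * np.log(2 * np.pi) + log_det + quad)

def log_lik_m0(y, sigma_sq):
    """Log likelihood under M0: y_i ~ N(0, sigma_sq), no free parameters."""
    n = len(y)
    return -0.5 * (n * np.log(2 * np.pi * sigma_sq) + np.sum(y**2) / sigma_sq)

def calibrated_log_bf(y, n_train, sigma_sq=1.0, seed=None):
    """Calibrate the prior for mu on a random training subset, then
    return the log Bayes factor (M1 vs M0) computed on the rest."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, rest = y[idx[:n_train]], y[idx[n_train:]]
    m0 = train.mean()              # calibrated prior centre
    tau0_sq = sigma_sq / n_train   # prior spread set by training-sample size
    return log_marginal_m1(rest, m0, tau0_sq, sigma_sq) - log_lik_m0(rest, sigma_sq)

rng = np.random.default_rng(0)
y = rng.normal(0.8, 1.0, size=100)  # simulated data with a nonzero mean
print(calibrated_log_bf(y, n_train=10, sigma_sq=1.0, seed=1))
```

Because the prior is tied to a training sample rather than made arbitrarily diffuse, the resulting Bayes factor does not collapse toward the simpler model the way it would under an extremely dispersed proper prior.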
