Abstract

A new look at the use of improper priors in Bayes factors for model comparison is presented. As is well known, in such a case the Bayes factor is only defined up to an arbitrary constant. Most current methods overcome the problem by using part of the sample to train the Bayes factor (Fractional Bayes Factor) or to transform the improper prior into a proper distribution (Intrinsic Bayes Factor), and then use the remainder of the sample for the model comparison. An alternative approach is provided, which relies on matching divergences between density functions in order to establish a value for the constant appearing in the Bayes factor. These are the Kullback–Leibler divergence and the Fisher information divergence; the latter is crucial, as it does not depend on an unknown normalizing constant. Demonstrations of the performance of the proposed method are provided through numerous illustrations and comparisons, showing that its main advantage over existing methods is that it does not require any input from the experimenter; it is fully automated.
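As a brief illustration of the two properties the abstract invokes (a sketch in generic notation, not the paper's own construction): writing each improper prior as $\pi_i(\theta_i) = c_i\, h_i(\theta_i)$ with an arbitrary constant $c_i$, the Bayes factor for comparing models $M_1$ and $M_2$ takes the form

\[
B_{12}(x) \;=\; \frac{c_1 \int f_1(x \mid \theta_1)\, h_1(\theta_1)\, d\theta_1}{c_2 \int f_2(x \mid \theta_2)\, h_2(\theta_2)\, d\theta_2},
\]

so it is determined only up to the unspecified ratio $c_1/c_2$. By contrast, the Fisher information divergence between two densities $p$ and $q$,

\[
\mathcal{F}(p, q) \;=\; \int p(x)\, \bigl\| \nabla_x \log p(x) - \nabla_x \log q(x) \bigr\|^2\, dx,
\]

depends on the densities only through their score functions $\nabla_x \log p$ and $\nabla_x \log q$, which are unchanged when a density is rescaled by a constant; this is why a divergence of this type can be evaluated without knowledge of the normalizing constant.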
