Abstract
A new Bayesian-inspired statistic for hypothesis testing is proposed which compares two posterior distributions: the observed posterior and the expected posterior under the null model. The Kullback–Leibler divergence between the two posterior distributions yields a test statistic which can be interpreted as a penalized log-Bayes factor, with the penalty term converging to a constant as the sample size increases. Hence, asymptotically, the statistic behaves as a Bayes factor. Viewed as a penalized Bayes factor, this approach resolves the long-standing difficulty of using improper priors with the Bayes factor, since only posterior summaries are needed for the new statistic. A further motivation for the new statistic is that it is a minimal departure from the Bayes factor: it requires no tuning and no splitting of the data into training and inference sets, and it can use improper priors. Critical regions for the test can be assessed using frequentist notions of Type I error.
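To make the penalized log-Bayes factor interpretation concrete, the following is a minimal sketch of the decomposition, assuming the observed posterior $\pi_1(\theta \mid x) \propto f_1(x \mid \theta)\,\pi(\theta)$ and the null-based posterior $\pi_0(\theta \mid x) \propto f_0(x \mid \theta)\,\pi(\theta)$ are built from a common (possibly improper) prior; the paper's exact construction of the expected posterior under the null may differ.
\[
\mathrm{KL}\bigl(\pi_1 \,\Vert\, \pi_0\bigr)
= \int \pi_1(\theta \mid x)\,\log\frac{\pi_1(\theta \mid x)}{\pi_0(\theta \mid x)}\,d\theta
= \mathbb{E}_{\pi_1}\!\left[\log\frac{f_1(x \mid \theta)}{f_0(x \mid \theta)}\right]
- \log\frac{m_1(x)}{m_0(x)},
\]
where $m_i(x) = \int f_i(x \mid \theta)\,\pi(\theta)\,d\theta$ is the marginal likelihood under model $i$. Under this reading, the second term is the log-Bayes factor and the first, a posterior expectation of the log-likelihood ratio, plays the role of the penalty; the statistic depends on the problem only through the two posteriors, which is the sense in which only posterior summaries are required.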