Abstract

Uniformly most powerful Bayesian tests (UMPBTs) are an objective class of Bayesian hypothesis tests that can be considered the Bayesian counterpart of classical uniformly most powerful tests. Because the rejection regions of UMPBTs can be matched to the rejection regions of classical uniformly most powerful tests (UMPTs), UMPBTs provide a mechanism for calibrating Bayesian evidence thresholds, Bayes factors, classical significance levels and p-values. The purpose of this article is to expand the application of UMPBTs outside the class of exponential family models. Specifically, we introduce sufficient conditions for the existence of UMPBTs and propose a unified approach for their derivation. An important application of our methodology is the extension of UMPBTs to testing whether the non-centrality parameter of a chi-squared distribution is zero. The resulting tests have broad applicability, providing default alternative hypotheses to compute Bayes factors in, for example, Pearson's chi-squared test for goodness-of-fit, tests of independence in contingency tables, and likelihood ratio, score and Wald tests.
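The calibration mentioned above can be sketched for the canonical one-sided test of a normal mean with known variance, where the UMPBT(γ) rejection region from the earlier UMPBT literature takes the form x̄ > μ0 + σ·sqrt(2·log(γ)/n) and therefore coincides with a z-test whose critical value is z_α = sqrt(2·log(γ)). The code below is an illustrative sketch of that known special case, not the chi-squared extension developed in this article; the function names are my own.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def matched_alpha(gamma):
    """Size of the classical one-sided z-test whose rejection region
    coincides with the UMPBT(gamma) rejection region for a normal mean
    with known variance: reject when z > sqrt(2 * log(gamma))."""
    return float(norm.sf(sqrt(2.0 * log(gamma))))

def matched_gamma(alpha):
    """Evidence threshold gamma whose UMPBT rejection region matches a
    one-sided z-test of size alpha: gamma = exp(z_alpha**2 / 2)."""
    z = float(norm.isf(alpha))
    return exp(0.5 * z * z)

# Example: the evidence threshold matched to the conventional alpha = 0.05,
# and the significance level recovered from that threshold.
gamma_05 = matched_gamma(0.05)
alpha_back = matched_alpha(gamma_05)
print(gamma_05, alpha_back)
```

Under this correspondence, more stringent significance levels map to rapidly growing evidence thresholds, which is the sense in which UMPBTs let p-values and Bayes factors be compared on a common scale.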

Highlights

  • Bayesian hypothesis tests are based on computing the posterior probabilities of competing hypotheses given data

  • In the test of independence in contingency tables, the UMPBT alternative is obtained by maximizing the approximate Bayes factor with respect to the parameter that controls the dispersion of the alternative hypothesis around the independence model

  • Uniformly most powerful Bayesian tests (UMPBTs) provide a new class of objective Bayesian hypothesis tests. These tests facilitate a direct comparison between p-values from classical tests and Bayes factors from Bayesian tests



Introduction

Bayesian hypothesis tests are based on computing the posterior probabilities of competing hypotheses given data. From Bayes theorem, the posterior probability of each hypothesis is proportional to the product of its prior probability and the marginal likelihood of the data given that the hypothesis is true. In the case of two competing hypotheses, the posterior odds between hypotheses H0 and H1 can be written as

P(H1 | x) / P(H0 | x) = [P(H1) / P(H0)] × BF10(x),

where BF10(x) = m1(x) / m0(x) is the Bayes factor in favor of H1. We assume throughout that the sampling density of the data x is defined with respect to a σ-finite measure and is described by the same parametric family of densities under both hypotheses, indexed by a parameter θ ∈ Θ. Letting f(x | θ) denote the sampling density of the data x given the value of the parameter θ, and πi(θ) the prior on θ under hypothesis Hi, the marginal density of the data under hypothesis Hi can be written as

mi(x) = ∫Θ f(x | θ) πi(θ) dθ.
