Abstract

The Bayes factor, as a measure of the evidence the data provide for one hypothesis against its alternative, can be highly sensitive to the prior distributions of the parameters involved in the hypotheses, as well as to the sample size. This sensitivity may cause a noticeable difference between Bayesian and classical (frequentist) hypothesis testing results; in the worst case the two results conflict, a phenomenon termed the Jeffreys-Lindley paradox. In this article, we propose a sample size-dependent prior strategy to bridge the Bayesian-frequentist gap from a decision-theoretic perspective. The central idea is to adjust the prior distributions of the parameters adaptively with the sample size, so that the Type I error risk of the Bayesian test is controlled at the same level as that prespecified for the frequentist test. The strategy is inspired by the work of Maurice Stevenson Bartlett (M. S. Bartlett, A comment on D. V. Lindley's statistical paradox, Biometrika, 44, 533–534, 1957), who suggested a sample size-dependent prior that makes the Bayes factor independent of the sample size. In contrast to his work, our strategy leverages sample size-dependent priors for risk management when deciding between the two hypotheses. To demonstrate its effectiveness, we examine normal mean tests in the cases where (i) the variance is known (z-test) and (ii) the variance is unknown (t-test). The Bayesian testing results obtained under the proposed strategy turn out to be consistent with their frequentist counterparts, and the Jeffreys-Lindley paradox disappears.
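The z-test case described above can be illustrated concretely. Below is a minimal sketch, assuming the standard conjugate setup H0: mu = 0 versus H1: mu ~ N(0, tau^2) with known sigma; the decision rule "reject H0 when BF_01 < 1" and the helper names bf01 and calibrate_tau are our illustrative choices, not the paper's notation. The sketch first reproduces the Jeffreys-Lindley paradox under a fixed prior scale, then calibrates tau as a function of n so that the Bayesian test's Type I error matches the frequentist level alpha.

```python
# Minimal sketch of the calibration idea for the known-variance z-test.
# Assumed setup (ours, for illustration): H0: mu = 0 vs H1: mu ~ N(0, tau^2),
# data N(mu, sigma^2) with sigma known, decision rule "reject H0 if BF_01 < 1".
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bf01(z, n, tau, sigma=1.0):
    """Bayes factor BF_01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2),
    given the standardized sample mean z = sqrt(n) * xbar / sigma."""
    r = n * tau**2 / sigma**2
    return np.sqrt(1.0 + r) * np.exp(-0.5 * z**2 * r / (1.0 + r))

# Jeffreys-Lindley paradox: z = 1.96 is frequentist-significant at alpha = 0.05,
# yet with a fixed prior scale BF_01 grows with n and ends up favoring H0.
for n in (10, 1_000, 100_000):
    print(n, bf01(z=1.96, n=n, tau=1.0))

def calibrate_tau(n, alpha=0.05, sigma=1.0):
    """Choose tau so that P(BF_01 < 1 | H0) = alpha, i.e. the Bayesian
    rejection region coincides with the frequentist |z| > z_{alpha/2}."""
    zcrit2 = norm.ppf(1.0 - alpha / 2.0) ** 2
    # Rejecting when BF_01 < 1 is equivalent to z^2 > (1 + r) * log(1 + r) / r,
    # where r = n * tau^2 / sigma^2; solve for the r that matches zcrit2.
    g = lambda r: (1.0 + r) * np.log1p(r) / r - zcrit2
    r = brentq(g, 1e-8, 1e8)
    return sigma * np.sqrt(r / n)

for n in (10, 1_000, 100_000):
    tau_n = calibrate_tau(n)
    print(n, tau_n, bf01(z=1.96, n=n, tau=tau_n))  # BF_01 ~ 1 at the boundary
```

In this sketch the calibrated tau shrinks like 1/sqrt(n), recovering the flavor of Bartlett's sample size-dependent prior while targeting the Type I error risk directly.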
