Abstract

A frequentist confidence interval can be constructed by inverting a hypothesis test, such that the interval contains only parameter values that would not have been rejected by the test. We show how a similar definition can be employed to construct a Bayesian support interval. Consistent with Carnap’s theory of corroboration, the support interval contains only parameter values that receive at least some minimum amount of support from the data. The support interval is not subject to Lindley’s paradox and provides an evidence-based perspective on inference that differs from the belief-based perspective that forms the basis of the standard Bayesian credible interval.

Highlights

  • Background on the Bayes factor: the Bayes factor quantifies the degree to which data y change the relative prior plausibility of two hypotheses to the relative posterior plausibility, as follows: p(H0 ∣ y) ∕ p(H1 ∣ y) = [p(H0) ∕ p(H1)] × [p(y ∣ H0) ∕ p(y ∣ H1)] (1), where the left-hand side is the relative posterior plausibility, the first right-hand factor is the relative prior plausibility, and the second right-hand factor is the Bayes factor BF01. For concreteness, consider a binomial test between H0 ∶ θ = 1∕2 vs. H1 ∶ θ ∼ Beta(2, 2) (a numerical sketch follows after this list)

  • The support interval is based on evidence—how the data change our beliefs—whereas the credible interval is based on the posterior beliefs directly

  • Lindley’s paradox states that a frequentist test can reject the null hypothesis at level α while, at the same time, the corresponding Bayesian test overwhelmingly supports the null hypothesis. Whereas this paradox is traditionally used to highlight the inevitable divergence of p values and Bayesian posterior probabilities in hypothesis testing, it can also be interpreted as a warning against the use of improper priors for Bayesian testing
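
The following minimal sketch (in Python, with hypothetical data of 60 successes in 100 trials) computes the Bayes factor BF01 of Equation (1) for this binomial test; the Beta(2, 2) prior under H1 yields a beta-binomial marginal likelihood.

    # Bayes factor BF01 for the binomial test H0: theta = 1/2 vs. H1: theta ~ Beta(a, b).
    # The data below are hypothetical; only standard numpy/scipy routines are used.
    import numpy as np
    from scipy.special import comb, betaln

    def bf01_binomial(successes, n, a=2, b=2):
        # Marginal likelihood under H0: theta fixed at 1/2
        log_m0 = np.log(comb(n, successes)) + n * np.log(0.5)
        # Marginal likelihood under H1: beta-binomial with a Beta(a, b) prior
        log_m1 = (np.log(comb(n, successes))
                  + betaln(successes + a, n - successes + b) - betaln(a, b))
        return np.exp(log_m0 - log_m1)

    # Example: 60 successes in 100 trials (hypothetical data)
    print(bf01_binomial(60, 100))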

Summary

Background on the Bayes Factor

The Bayes factor quantifies the degree to which data y change the relative prior plausibility of two hypotheses (say H0 and H1) to the relative posterior plausibility. The same updating rule applies to individual parameter values within a model: the ratio of posterior to prior density for a parameter value θ is precisely equal to its predictive updating factor, that is, p(θ ∣ y) ∕ p(θ) = p(y ∣ θ) ∕ p(y). We can use this relation to define an interval containing only those values of θ that receive a certain minimum level of corroboration from the data, namely those that predict the observed data at least k times better than average; these are the values of θ associated with an updating factor p(y ∣ θ) ∕ p(y) ≥ k.
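
A minimal numerical sketch of this definition (assuming the Beta(2, 2) prior from the binomial example and hypothetical data of 60 successes in 100 trials; the function name and grid resolution are illustrative choices): the k-support interval collects all values of θ whose posterior-to-prior density ratio, which equals the updating factor p(y ∣ θ)∕p(y), is at least k.

    # Support interval for a binomial rate theta with a Beta(a, b) prior:
    # keep all theta whose posterior density exceeds the prior density by a factor >= k.
    import numpy as np
    from scipy.stats import beta

    def support_interval(successes, n, k=1, a=2, b=2):
        # Interior grid avoids 0/0 at the boundaries, where both densities vanish
        grid = np.linspace(1e-4, 1 - 1e-4, 10001)
        prior = beta(a, b)
        posterior = beta(a + successes, b + n - successes)  # conjugate update
        ratio = posterior.pdf(grid) / prior.pdf(grid)       # updating factor p(y | theta) / p(y)
        supported = grid[ratio >= k]                        # assumed nonempty and contiguous
        return supported.min(), supported.max()

    # Example: values of theta supported by a factor of at least k = 3
    print(support_interval(60, 100, k=3))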

Example
Comparison to the Credible Interval
A Likelihood Perspective
Conceptual Advantages of the Support Interval
Nuisance Parameters
Earlier Work
Findings
Concluding Comments