Abstract

Researchers often have informative hypotheses in mind when comparing means across treatment groups, such as H1 : μA < μB < μC and H2 : μB < μA < μC, and want to compare these hypotheses to each other directly. This can be done by means of Bayesian inference. This article discusses the disadvantages of the frequentist approach to null hypothesis testing and the advantages of the Bayesian approach. It demonstrates how to use the Bayesian approach to hypothesis testing in the setting of cluster-randomized trials. The data from a school-based smoking prevention intervention with four treatment groups are used to illustrate the Bayesian approach. The main advantage of the Bayesian approach is that it provides a degree of evidence from the collected data in favor of an informative hypothesis. Furthermore, a simulation study was conducted to investigate how Bayes factors behave with cluster-randomized trials. The results from the simulation study showed that the Bayes factor increases with increasing number of clusters, cluster size, and effect size, and decreases with increasing intraclass correlation coefficient. The effect of the number of clusters is stronger than the effect of cluster size. With a small number of clusters, the effect of increasing cluster size may be weak, especially when the intraclass correlation coefficient is large. In conclusion, the study showed that the Bayes factor is affected by sample size and intraclass correlation similarly to how these parameters affect statistical power in the frequentist approach to null hypothesis significance testing. Bayesian evaluation may be used as an alternative to null hypothesis testing.

Highlights

  • Researchers often have informative hypotheses in mind when comparing means across treatment groups, such as H1 : μA < μB < μC and H2 : μB < μA < μC, and want to compare these hypotheses to each other directly

  • The common practice when comparing the mean outcomes of k > 2 treatment conditions is to test the omnibus null hypothesis H0 : μ1 = μ2 = ... = μk by means of a one-way analysis of variance (ANOVA)

  • Once the data are collected, their likelihood function is combined with the prior distribution to get the posterior distribution. Both prior and posterior are required in order to calculate a so-called Bayes factor, which is a quantification of the degree of evidence in the collected data in favor of an informative hypothesis
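The prior-to-posterior step described above can be sketched with a small Monte Carlo computation. The snippet below is a minimal illustration, not the article's own implementation: it assumes a common "encompassing prior" approach in which the Bayes factor of an order-constrained hypothesis such as H1 : μA < μB < μC against the unconstrained hypothesis is estimated as the proportion of posterior draws satisfying the constraint divided by the proportion of prior draws satisfying it. The group means, sample sizes, and prior spread are hypothetical, and clustering is ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data for three treatment groups (clustering ignored here).
data = {
    "A": rng.normal(0.0, 1.0, 50),
    "B": rng.normal(0.5, 1.0, 50),
    "C": rng.normal(1.0, 1.0, 50),
}

n_draws = 100_000

# Approximate posterior for each group mean under a vague prior:
# Normal(sample mean, s / sqrt(n)).
post = {
    g: rng.normal(y.mean(), y.std(ddof=1) / np.sqrt(len(y)), n_draws)
    for g, y in data.items()
}

# Vague encompassing prior, identical for every group mean.
prior = {g: rng.normal(0.0, 10.0, n_draws) for g in data}

def ordered(d):
    """Fraction of draws satisfying the constraint muA < muB < muC."""
    return np.mean((d["A"] < d["B"]) & (d["B"] < d["C"]))

# Bayes factor of H1 against the unconstrained hypothesis:
# fit (posterior mass of the constraint) over complexity (prior mass).
fit, complexity = ordered(post), ordered(prior)
bf_1a = fit / complexity
print(f"fit={fit:.3f}, complexity={complexity:.3f}, BF_1a={bf_1a:.2f}")
```

Because the prior treats the three means exchangeably, the complexity term is close to 1/6 (one of the six possible orderings), so a posterior that concentrates on the hypothesized ordering yields a Bayes factor well above 1.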

Summary

Introduction

Researchers often have informative hypotheses in mind when comparing means across treatment groups, such as H1 : μA < μB < μC and H2 : μB < μA < μC, and want to compare these hypotheses to each other directly. Once the data are collected, their likelihood function is combined with the prior distribution to obtain the posterior distribution. Both prior and posterior are required in order to calculate a so-called Bayes factor, which is a quantification of the degree of evidence in the collected data in favor of an informative hypothesis (as compared to the unconstrained hypothesis Ha).
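Under the encompassing-prior formulation commonly used for informative hypotheses (an assumption here; this excerpt does not spell out the estimator), the Bayes factor of H1 against the unconstrained Ha reduces to a ratio of fit to complexity:

```latex
BF_{1a} = \frac{f_1}{c_1}
```

where $f_1$ is the posterior probability that the order constraint $\mu_A < \mu_B < \mu_C$ holds and $c_1$ is the corresponding prior probability of that constraint.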
