Abstract

This article considers sample size computation for designing multiple comparisons experiments. We propose that sample size be computed so that the multiple comparisons confidence intervals will cover the true parameters and be sufficiently narrow, with a guaranteed high probability. Appropriate formulas and computer implementation are provided for Tukey's method of all-pairwise multiple comparisons (MCA), multiple comparisons with the best (MCB) as proposed by the author, and Dunnett's method of multiple comparisons with a control (MCC). Our sample size computation is then compared with the usual computation based on the power of the F-test. An advantage of our method over the power-of-test method is that ours guarantees a high probability of correct multiple comparisons inference, while the latter does not: the probability of rejecting a false null hypothesis includes the probability of directional error, that is, inferring a treatment to be better than another when it is in fact worse. The graphical nature of our computer implementation also makes sensitivity analysis immediate.
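As an illustration of the kind of computation the abstract describes, the sketch below searches for the smallest per-group sample size that makes Tukey MCA confidence intervals narrower than a target half-width. This is a simplified, hypothetical version, not the author's method: it plugs in a fixed planning value for the error standard deviation and targets the nominal half-width, whereas the paper's approach guarantees simultaneous coverage and narrowness with a stated probability, accounting for the randomness of the estimated variance.

```python
# Hedged sketch: simplified per-group sample-size search for Tukey's
# all-pairwise comparisons (MCA) in a balanced one-way layout.
# Assumptions (not from the paper): sigma is a known planning value,
# and we only require the nominal half-width q * sigma / sqrt(n) <= delta.
from scipy.stats import studentized_range


def tukey_sample_size(k, sigma, delta, alpha=0.05, n_max=10_000):
    """Smallest common group size n such that the equal-n Tukey MCA
    interval half-width  q_{alpha,k,nu} * sigma / sqrt(n)  is <= delta."""
    for n in range(2, n_max + 1):
        nu = k * (n - 1)  # error degrees of freedom in one-way ANOVA
        q = studentized_range.ppf(1 - alpha, k, nu)
        # For mu_i - mu_j the interval is +/- (q/sqrt(2)) * s * sqrt(2/n),
        # which simplifies to q * s / sqrt(n) when all groups have size n.
        if q * sigma / n ** 0.5 <= delta:
            return n
    raise ValueError("no n <= n_max achieves the target half-width")
```

For example, `tukey_sample_size(4, 1.0, 0.5)` returns the smallest n for four treatments with planning sigma 1 and target half-width 0.5; rerunning with perturbed sigma values gives a quick sensitivity check in the spirit of the graphical implementation mentioned above.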
