Abstract

Statistical tests are used to determine the probability that an association between independent and dependent variables observed in a study sample represents an association that exists in a larger, target population. All too frequently, however, investigators conduct statistical "fishing expeditions" to increase the likelihood of "catching" significant differences across multiple experimental groups, without considering the effects these practices have on experiment-wise error rates. As with fishing, it is important to impose limits on the number of catches that can be considered "keepers." In statistical testing, these limits are expressed as type I and type II decision errors. We describe statistical techniques that decrease type II errors, thereby increasing statistical power, while simultaneously controlling for type I errors. A priori techniques discussed include planned contrasts and Dunn's test. For post hoc data analyses, tests based on range statistics, including Tukey's honestly significant difference (HSD) test, the Newman-Keuls procedure, Dunnett's test, and the Scheffé approach, are considered, and examples of their use are provided. These techniques involve different trade-offs between type I and type II error rates. We discuss the investigator's responsibility to weigh relevant ethical and theoretical considerations, as well as experimental concerns, when determining an appropriate compromise between these two types of decision errors. We conclude our description of multiple statistical comparison procedures with recommendations for which test to use under specific experimental conditions.
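As a minimal illustration of the kind of post hoc procedure the abstract describes, the sketch below applies Tukey's HSD test to three hypothetical treatment groups using SciPy's `scipy.stats.tukey_hsd`. The group names, sample sizes, and simulated data are assumptions for demonstration only, not data from the study itself.

```python
import numpy as np
from scipy.stats import tukey_hsd

# Hypothetical data: three treatment groups of 30 observations each.
# Group c is deliberately shifted so at least one pairwise difference
# should be detected after controlling the experiment-wise error rate.
rng = np.random.default_rng(0)
a = rng.normal(10.0, 2.0, 30)
b = rng.normal(10.5, 2.0, 30)
c = rng.normal(13.0, 2.0, 30)

# Tukey's HSD performs all pairwise comparisons while holding the
# family-wise type I error rate at the nominal level (default 0.05).
res = tukey_hsd(a, b, c)

# res.pvalue is a symmetric matrix of adjusted pairwise p-values.
print(res.pvalue)
```

Because the HSD adjustment is built into the critical value, the p-values in `res.pvalue` can be compared directly against the chosen alpha without any further correction.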

Full Text
Published version (Free)