Abstract

A classical prospective power analysis estimates the chance of obtaining a statistically significant result, but it says nothing about how reliable that result will be. “Design analysis” is a complementary component of study planning that addresses this limitation. Monte Carlo simulations and innovative freeware were used to illustrate common, potentially grave problems that make design analysis necessary. Five statements outline widely known background to those problems. (1) A regime of significance testing tends to engender publication bias. (2) Small-sample studies commonly have very low expected statistical power. (3) Many SLA (quasi-)experimental studies have used small samples. (4) The combination of publication bias and low average power seeds a research literature with findings that well-powered replication studies fail to repeat. (5) Published estimates of true effects are often too high. On the last point, many SLA researchers may be unaware of the mathematical reason why obtained significant estimates tend to be too high whenever expected power is not high, as it frequently is not. Indeed, if power is low enough, a significant result can only be misleading: any good estimate will be nonsignificant. The design analysis procedures used to illustrate these problems can also be used to estimate the sample sizes required for adequate expected power, along with good control of the risk of obtaining very misleading significant results.
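The mechanism behind statement (5) can be sketched with a minimal Monte Carlo simulation. This is an assumed, illustrative setup (not the paper's own simulation code): a small true effect, a small two-group design, and a conventional two-sided test at α = .05. Because only large sample estimates clear the significance threshold when power is low, the average significant estimate greatly exaggerates the true effect.

```python
import numpy as np

# Illustrative sketch of the "significance filter": with low power, the
# estimates that reach significance systematically overstate the true effect.
# All parameter values below are assumptions for demonstration only.
rng = np.random.default_rng(42)

true_d = 0.2        # assumed small true effect (standardized mean difference)
n_per_group = 20    # small per-group sample, typical of low-power designs
n_sims = 100_000    # number of simulated studies

# Simulate two-group comparisons; group 1 mean = true_d, group 2 mean = 0.
g1 = rng.normal(true_d, 1.0, size=(n_sims, n_per_group))
g2 = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))

diff = g1.mean(axis=1) - g2.mean(axis=1)
pooled_sd = np.sqrt((g1.var(axis=1, ddof=1) + g2.var(axis=1, ddof=1)) / 2)
d_hat = diff / pooled_sd                       # estimated effect size per study

# Two-sample t statistic and two-sided test at alpha = .05 (df = 38,
# critical value approx. 2.024).
t = diff / (pooled_sd * np.sqrt(2 / n_per_group))
sig = np.abs(t) > 2.024

power = sig.mean()                # proportion of significant results
mean_sig_d = d_hat[sig].mean()    # average effect estimate among them

print(f"expected power:              {power:.2f}")
print(f"true effect d:               {true_d}")
print(f"mean significant estimate:   {mean_sig_d:.2f}")
```

Under these assumed values, power lands near 10%, and the average significant estimate is several times the true effect: exactly the pattern the abstract warns about, where any "good" (accurate) estimate would have been nonsignificant.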
