Abstract

The power of research designs in studies published in the Australian Journal of Science and Medicine in Sport (AJSMS; now the Journal of Science and Medicine in Sport) for the years 1996 and 1997 was analysed for the ability to detect small, medium, and large effects according to Cohen's (1988) conventions. Also examined were the reporting and interpretation of effect sizes and the control of experiment-wise (EW) Type I error rates. From the two years of articles, 29 studies were analysed and power was computed for 108 different tests of significance. The median powers of the studies to detect small, medium, and large effects were .14, .65, and .97, respectively. These results suggest that exercise and sport science research, at least as represented in the AJSMS, is probably underpowered and may be limited in detecting small effects, has a better, though still inadequate, chance of detecting medium effects, and has adequate power principally for detecting large effects. Effect sizes were rarely reported, and adequate interpretation of them was rarer still. The mean EW Type I error rate across all studies was .49. Together, these analyses suggest that much research in exercise science may carry substantial Type I and Type II error rates. An appeal is made for exercise scientists to conduct power analyses, control EW error, exercise caution in interpreting nonsignificant results, and examine, report, and interpret effect sizes rather than rely solely on p values to determine whether significant changes have occurred or significant relationships exist.
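As a rough illustration of the two quantities discussed above (a minimal sketch, not the authors' analysis), the Python code below uses statsmodels to compute the power of an independent-samples t-test at Cohen's conventional small, medium, and large effect sizes (d = 0.2, 0.5, 0.8), and then applies the standard formula for the experiment-wise Type I error rate across k independent, uncorrected tests, alpha_EW = 1 - (1 - alpha)^k. The per-group sample size n = 20 and the number of tests k = 13 are hypothetical values chosen for illustration (about 13 uncorrected tests at alpha = .05 happen to yield an EW rate near .49); they are not taken from the surveyed studies.

# Sketch only: power of a two-sample t-test at Cohen's conventional effect
# sizes, and the experiment-wise Type I error rate for k independent tests.
# The sample size and number of tests are hypothetical.
from statsmodels.stats.power import TTestIndPower

n_per_group = 20          # hypothetical group size
alpha = 0.05              # per-test significance level
analysis = TTestIndPower()

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.power(effect_size=d, nobs1=n_per_group, alpha=alpha)
    print(f"{label} effect (d = {d}): power = {power:.2f}")

# Experiment-wise (EW) Type I error rate when k independent tests are each
# run at alpha with no correction: alpha_EW = 1 - (1 - alpha)**k
k = 13                    # hypothetical number of uncorrected tests
alpha_ew = 1 - (1 - alpha) ** k
print(f"EW Type I error rate for {k} uncorrected tests: {alpha_ew:.2f}")

The same power calculation run in reverse (solving for the sample size that achieves, say, .80 power at a given effect size) is the prospective power analysis the abstract recommends.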
