Abstract

A statistical method called "magnitude-based inference" (MBI) has gained a following in the sports science literature, despite concerns voiced by statisticians. Its proponents have claimed that MBI exhibits superior type I and type II error rates compared with standard null hypothesis testing for most cases. I have performed a reanalysis to evaluate this claim. Using simulation code provided by MBI's proponents, I estimated type I and type II error rates for clinical and nonclinical MBI for a range of effect sizes, sample sizes, and smallest important effects. I plotted these results in a way that makes transparent the empirical behavior of MBI. I also reran the simulations after correcting mistakes in the definitions of type I and type II error provided by MBI's proponents. Finally, I confirmed the findings mathematically, and I provide general equations for calculating MBI's error rates without the need for simulation. Contrary to what MBI's proponents have claimed, MBI does not exhibit "superior" type I and type II error rates to standard null hypothesis testing. As expected, there is a tradeoff between type I and type II error. At precisely the small-to-moderate sample sizes that MBI's proponents deem "optimal," MBI reduces the type II error rate at the cost of greatly inflating the type I error rate, to two to six times that of standard hypothesis testing. Magnitude-based inference exhibits worrisome empirical behavior. In contrast to standard null hypothesis testing, which has predictable type I error rates, the type I error rates for MBI vary widely depending on the sample size and choice of smallest important effect, and are often unacceptably high. Magnitude-based inference should not be used.
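The simulation approach described above can be illustrated with a minimal sketch (not the authors' actual code): to estimate a procedure's type I error rate, repeatedly generate data under a true effect of zero and count how often the procedure declares an effect. Here, the procedure is a standard two-sample t-test at alpha = 0.05; the function name and parameter defaults are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def estimate_type1_error(n=10, alpha=0.05, n_sims=20000, seed=0):
    """Estimate the type I error rate of a two-sample t-test by
    simulation: draw both groups from the same normal distribution
    (true effect = 0) and count how often the test rejects at level alpha."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, 1.0, size=(n_sims, n))  # group 1; the null is true
    b = rng.normal(0.0, 1.0, size=(n_sims, n))  # group 2, same distribution
    _, p = stats.ttest_ind(a, b, axis=1)        # one t-test per simulated study
    return float(np.mean(p < alpha))            # empirical rejection rate

# For standard null hypothesis testing this estimate should sit near the
# nominal alpha (here 0.05) regardless of sample size; the abstract's point
# is that the analogous estimate for MBI does not.
```

Substituting an MBI-style decision rule for the t-test in the same harness is how the error rates compared in the abstract would be obtained by simulation.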
