Some authors debate whether effect sizes should be reported (a) for all null hypothesis tests, even non–statistically significant ones, or (b) only after a finding is first determined to be statistically significant. The decision to report and interpret small effects may depend in part on the amount of bias in the effect size measure used. Based on the recognition that variance-accounted-for effect statistics are positively biased, and that standardized difference effect sizes such as Cohen’s d can be converted into r² metrics and vice versa, the authors reasoned that d may also be biased. They therefore explored the amount of bias in Cohen’s d across a series of simulated study conditions. Results from their simulations indicated negligible bias (close to zero) in Cohen’s d across all study conditions.
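As a rough illustration of the kind of check the abstract describes, the sketch below (a hypothetical Python example, not the authors’ actual simulation design) draws many two-group samples from populations with a known standardized mean difference, computes Cohen’s d for each sample, and compares the average estimate to the true value; the group size, true effect, and replication count are illustrative assumptions. A helper also shows the standard d-to-r conversion, from which the r² metric follows by squaring.

```python
# Minimal Monte Carlo sketch for estimating bias in Cohen's d.
# All parameter values (true_d, n_per_group, reps) are illustrative
# assumptions, not taken from the paper's simulation conditions.
import numpy as np

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def d_to_r(d, nx, ny):
    """Convert d to a point-biserial r; square it for the r² metric."""
    a = (nx + ny) ** 2 / (nx * ny)  # equals 4 when group sizes are equal
    return d / np.sqrt(d ** 2 + a)

rng = np.random.default_rng(0)
true_d = 0.5        # assumed population standardized mean difference
n_per_group = 20    # assumed per-group sample size
reps = 100_000      # assumed number of simulated studies

ds = np.empty(reps)
for i in range(reps):
    x = rng.normal(true_d, 1.0, n_per_group)  # group 1: mean shifted by true_d
    y = rng.normal(0.0, 1.0, n_per_group)     # group 2: standard normal
    ds[i] = cohens_d(x, y)

# Bias is the difference between the mean estimate and the true effect.
print(f"mean d = {ds.mean():.4f}, estimated bias = {ds.mean() - true_d:+.4f}")
```

Increasing the replication count narrows the Monte Carlo error around the estimated bias; varying the true effect and per-group sample size would reproduce a grid of study conditions like the one the abstract alludes to.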