Abstract

Given the long history of debate over statistical testing and effect size indices, and the repeated efforts of the American Psychological Association and the American Educational Research Association to encourage effect size reporting, most journals in education and psychology have seen an increase in effect size reporting since 1999. Yet effect size has typically been reported with just three indices, namely the unadjusted R², Cohen's d, and η², accompanied by a simple label of small, medium, or large according to Cohen's (1969) criteria. In this article, the authors present several alternatives to Cohen's d to help researchers conceptualize effect size beyond standardized mean differences for between-subjects designs with two groups. The alternative effect size estimators are organized into a typology and are empirically contrasted with Cohen's d in terms of purposes, usages, statistical properties, interpretability, and potential for meta-analysis. Several sound alternatives are identified to supplement the reporting of Cohen's d. The article concludes with a discussion of the choice of standardizers, the importance of assumptions, and the possibility of extending sound alternative effect size indices to other research contexts.
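As context for the standardized mean difference the abstract refers to, the following is a minimal sketch of how Cohen's d is conventionally computed for two independent groups, using the pooled standard deviation as the standardizer. This is an illustration of the standard formula, not the authors' own implementation; the function name and sample data are hypothetical.

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two independent groups,
    standardized by the pooled standard deviation (Cohen's d)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased (n - 1) sample variances for each group
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative example with made-up data
d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
```

The choice of standardizer (pooled SD here, but a control-group SD or other variants are possible) is exactly the kind of decision the article discusses, and it changes the resulting index.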
