Abstract

Alternative displays of effect size statistics can enhance the understandability and impact of validity evidence in a variety of applied settings. Arguably, the adoption of alternative effect size statistics has been limited by the lack of user-friendly tools to create them: common statistical packages do not readily produce these alternative effect sizes, and existing tools are outdated and inaccessible. In this paper, I introduce a free-to-use web-based calculator (https://dczhang.shinyapps.io/expectancyApp/) for generating alternative effect size displays from empirical data. The calculator requires no mathematical or programming expertise and is therefore well suited to both academics and practitioners. I also present results from an empirical study demonstrating the benefits of alternative effect size displays for enhancing laypeople's perceived understandability of validity information and their attitudes toward the use of standardized testing for college admissions.

Highlights

  • Traditional effect size indices, such as the correlation coefficient, are commonplace in the academic literature

  • This study extends the work of Brooks and colleagues in two ways: (1) in addition to the Binomial Effect Size Display (BESD) and the Common Language Effect Size (CLES), I examine the effect of the expectancy chart on validity communication; and (2) whereas Brooks and colleagues used theoretically derived effect sizes, this experiment uses empirically calculated effect sizes

  • The purpose of the study was to examine the effects of traditional (r and r²) and alternative (CLES, BESD, and expectancy chart) validity displays on participants' perceived comprehension and subsequent judgments toward the ACT

Introduction

Traditional effect size indices, such as the correlation coefficient, are commonplace in the academic literature. Yet they are often difficult to understand and are rarely translated into real-world outcomes. The practical utility of a correlation is often obscured: critics of using the SAT as a college admissions test asserted that “the SAT only adds 5.4 percent of variance explained by HSGPA [high school grade point average] alone” (Kidder and Rosner, 2002), even though the same evidence has been used to support its utility in college admission decisions (e.g., Kuncel and Hezlett, 2007). Effect size information, when communicated effectively, should be easy to understand and should elucidate the practical impact of the interventions or relations it aims to represent.
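
For readers who want the arithmetic behind these alternative displays, the minimal Python sketch below shows how a single correlation can be re-expressed as a BESD or a CLES. The calculator itself is a web application, so this is illustrative only; the value r = 0.35 and the helper function names are hypothetical, but the formulas are the standard ones: the BESD of Rosenthal and Rubin (1982) and the correlation-based CLES of Dunlap (1994).

    import math

    def besd_rates(r):
        # Binomial Effect Size Display (Rosenthal & Rubin, 1982):
        # re-expresses r as "success" rates of 0.50 + r/2 and
        # 0.50 - r/2 in two equal-sized groups.
        return 0.50 + r / 2, 0.50 - r / 2

    def cles_from_r(r):
        # Common Language Effect Size for a correlation (Dunlap, 1994):
        # the probability that a case above the mean on the predictor
        # is also above the mean on the outcome, assuming bivariate
        # normality.
        return 0.5 + math.asin(r) / math.pi

    r = 0.35  # hypothetical validity coefficient, not a value from this study
    hi_rate, lo_rate = besd_rates(r)
    print(f"r = {r:.2f}; r^2 = {r*r:.1%} of variance explained")
    print(f"BESD: {hi_rate:.0%} vs. {lo_rate:.0%} success rates")
    print(f"CLES: {cles_from_r(r):.0%} chance an above-average scorer "
          f"is above average on the outcome")

The contrast in framing is the point: the same r = 0.35 reads as "about 12 percent of variance explained" in traditional terms, but as a 67.5 percent versus 32.5 percent difference in success rates under the BESD, which is the kind of translation into real-world outcomes that the alternative displays are meant to provide.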
