Abstract

Effect sizes are commonly interpreted using heuristics established by Cohen (e.g., small: r = .1, medium: r = .3, large: r = .5), despite mounting evidence that these guidelines are miscalibrated to the effects typically found in psychological research. This study’s aims were to (1) describe the distribution of effect sizes across multiple instruments, (2) consider factors qualifying the effect size distribution, and (3) identify examples as benchmarks for various effect sizes. For aim one, effect size distributions were illustrated from a large, diverse sample of 9- to 10-year-old children by conducting Pearson’s correlations among 161 variables representing constructs from all questionnaires and tasks in the Adolescent Brain Cognitive Development (ABCD) Study® baseline data. For aim two, factors qualifying this distribution were tested by comparing the distributions of effect size among various modifications of the aim one analyses. These modified analytic strategies included comparisons of effect size distributions for different types of variables, for analyses using statistical thresholds, and for analyses using several covariate strategies. In aim one analyses, the median in-sample effect size was .03, and the values at the first and third quartiles were .01 and .07. In aim two analyses, effects were smaller for associations across instruments, content domains, and reporters, as well as when covarying for sociodemographic factors; effect sizes were larger when thresholding for statistical significance. In analyses intended to mimic conditions used in “real-world” analysis of ABCD data, the median in-sample effect size was .05, and the values at the first and third quartiles were .03 and .09. For aim three, examples of varying effect sizes are reported from the ABCD dataset as benchmarks for future work with these data.
In summary, this report finds that empirically determined effect sizes from a notably large dataset are smaller than would be expected based on existing heuristics.

Highlights

  • In its contemporary usage, the term “effect size” refers to a standardized index of the strength or magnitude of an association between two variables or the size of difference between two groups [1, 2]

  • The current report describes the distribution of effect sizes throughout the questionnaire and task data collected as part of the Adolescent Brain Cognitive Development (ABCD) study

  • Given that the majority of research questions in contemporary psychology/psychiatry explore relationships between distinct constructs, the between-domain, between-instrument, and between-reporter effect size distributions are likely more representative of effect sizes that would be observed in most contemporary research


Introduction

The term “effect size” refers to a standardized index of the strength or magnitude of an association between two variables, or of the size of a difference between two groups [1, 2]. Meyer and colleagues [6] lay out benchmarks for effects of various sizes, helping researchers develop a more concrete understanding of effect sizes through examples from daily life (e.g., the association between antihistamine use and reduced runny nose, r = .11). Another approach to improving our conceptualization of effect sizes is to consider the distribution of effect sizes observed across a broad range of social science applications. Low power is an issue that permeates psychological research and adjacent fields and is known to inflate effect size estimates through what is called the “winner’s curse” [9,10,11]. This suggests there are numerous processes related to the conduct and publication of psychological research that are likely producing artificially high effect sizes throughout the psychological literature.
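The inflation mechanism behind the winner's curse can be made concrete with a small simulation. This is my illustration, not from the paper: many low-powered two-group studies are run with a small true effect, only those crossing a significance threshold are "published", and the published subset systematically overestimates the true effect. The sample sizes, the true effect of d = 0.2, and the approximate z-test are all assumptions for demonstration.

```python
# Minimal winner's-curse simulation (illustrative assumptions, not
# the paper's method): under low power, significant results
# systematically overestimate the true effect size.
import random
import statistics

random.seed(1)

true_d = 0.2          # small true standardized mean difference
n_per_group = 25      # deliberately low-powered design
n_studies = 2000

all_ds, significant_ds = [], []
for _ in range(n_studies):
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    treated = [random.gauss(true_d, 1.0) for _ in range(n_per_group)]
    pooled_sd = statistics.pstdev(control + treated)  # rough pooled SD
    d = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
    all_ds.append(d)
    # Approximate two-sided z-test at alpha = .05: SE of d is roughly
    # sqrt(2 / n) per group for small effects.
    se = (2 / n_per_group) ** 0.5
    if abs(d) / se > 1.96:
        significant_ds.append(d)

print(f"mean d, all studies:        {statistics.mean(all_ds):.2f}")
print(f"mean d, 'published' subset: {statistics.mean(significant_ds):.2f}")
```

Across all studies the average estimate sits near the true d of 0.2, but the average among the significant subset is far larger, because only estimates that happened to exceed the threshold survive the filter.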
