Abstract
Data from psychological experiments pose a causal generalization paradox. Unless the experimental results have some generality, they contribute little to scientific knowledge. Yet, because most experiments use convenience samples rather than probability-based samples, there is almost never a formal justification, or set of rigorous guidelines, for generalizing the study's findings to other populations. This article discusses the causal generalization paradox in the context of outcome findings from experimental evaluations of psychological treatment programs and services. In grappling with the generalization paradox, researchers often make misleading (or at least oversimplified) assumptions. The article analyzes 10 such assumptions, including the belief that a significant experimental treatment effect is likely to be causally generalizable and the belief that the magnitude of a significant experimental effect provides a sound effect size estimate for causal generalization. The article then outlines 10 constructive strategies for assessing and enhancing causal generality. They include strategies involving the scaling level of outcome measures, variable treatment dosages, effectiveness designs, multiple measures, corroboration from observational designs, and the synthesis of multiple studies. Finally, the article's discussion section reviews the conditions under which causal generalizations are justified.