Abstract
Multisite (multilab/many-lab) replications have emerged as a popular way of verifying prior research findings, but their record in social psychology has prompted distrust of the field and a sense of crisis. We review all 36 multisite social-psychology replications (plus three articles reporting multiple ministudies). We start by assuming that both the original studies and the multisite replications were conducted in an honest and diligent fashion, despite often yielding different conclusions. Four of the 36 (11%) were clearly successful in providing significant support for the original hypothesis, and five others (14%) had mixed results. The remaining 27 (75%) were failures. We consider multiple explanations for this generally poor record and assess the relevant evidence, including the possibility that the original hypothesis was wrong, operational failure, low engagement of participants, and bias toward failure. There was evidence for each of these possibilities, with low engagement emerging as a widespread problem (reflected in high rates of discarded data and weak manipulation checks). The few procedures involving actual interpersonal interaction fared much better than the others. We discuss implications in relation to manipulation checks, effect sizes, and impact on the field, and we offer recommendations for improving future multisite projects.