Abstract

A recent article reported difficulty in replicating psychological findings and concluded that training and other moderators were relatively unimportant in predicting replication effect sizes. Using an objective measure of research expertise (number of publications), we found that expertise predicted larger replication effect sizes. The effect sizes selected and obtained by high-expertise replication teams were nearly twice as large as those obtained by low-expertise teams, particularly in replications of social psychology effects. Surprisingly, this effect appeared to be explained by experts choosing studies to replicate that had larger original effect sizes. There was little evidence that expertise predicted avoiding red flags (i.e., the troubling trio) or studies that varied in execution difficulty. However, experts did choose studies that were less context sensitive. Our results suggest that experts achieve greater replication success, in part, because they choose more robust and generalizable studies to replicate.
