Abstract

Psychological science, as a field, continues to struggle with the challenge of establishing interesting, important, and replicable phenomena. As I often tell my students, “If scientific psychology were easy, everyone would do it.” We can take some comfort in knowing that other sciences face similar challenges (e.g., Begley & Ellis, 2012). But our business is with psychology. In August of this year, Science published a fascinating article by Brian Nosek and 269 coauthors (Open Science Collaboration, 2015). They reported direct replication attempts of 100 experiments published in prestigious psychology journals in 2008, including experiments reported in 39 articles in Psychological Science. Although I expect there is room to critique some of the replications, the article strikes me as a terrific piece of work, and I recommend reading it (and giving it to students). For each experiment, the researchers prespecified a benchmark finding. On average, the replications had statistical power of .90 or greater to detect effects of the sizes obtained in the original studies, yet fewer than half of them yielded a statistically significant effect. As Nosek and his coauthors made clear, even ideal replications of ideal studies are expected to fail some of the time (Francis, 2012), and failure to replicate a previously observed effect can arise from differences between the original and replication studies and hence does not necessarily indicate flaws in the original study (Maxwell, Lau, & Howard, 2015; Stroebe & Strack, 2014). Still, it seems likely that psychology journals have too often reported spurious effects arising from Type I errors (e.g., Francis, 2014). …
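To make the power arithmetic concrete, here is a minimal sketch, not taken from the article itself, of how such power calculations work. It uses statsmodels' TTestIndPower for a two-group design; the effect sizes (an assumed "original" d = 0.5 and an assumed smaller "true" d = 0.2) are hypothetical, chosen only to illustrate why a replication powered at .90 for the published effect size can still fail when the original estimate was inflated (e.g., by publication bias or a Type I error).

```python
# Hypothetical illustration of the power reasoning above (not from the
# Open Science Collaboration paper; the effect sizes are assumptions).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
alpha = 0.05

# Suppose an original study reported a standardized effect of d = 0.5.
d_original = 0.5

# Sample size per group needed for .90 power at the *original* effect size.
n_per_group = analysis.solve_power(
    effect_size=d_original, power=0.90, alpha=alpha, alternative="two-sided"
)
print(f"n per group for .90 power at d={d_original}: {n_per_group:.0f}")

# If the original estimate was inflated and the true effect is smaller,
# the same replication design is badly underpowered.
d_true = 0.2
achieved = analysis.power(
    effect_size=d_true, nobs1=n_per_group, alpha=alpha, alternative="two-sided"
)
print(f"Power at the smaller true effect d={d_true}: {achieved:.2f}")
```

Under these assumed numbers, a design with roughly 85 participants per group has .90 power for d = 0.5 but only about .25 power for d = 0.2, which is one way a set of nominally well-powered replications could yield significant results less than half the time.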
