Abstract

Science is not built on interesting findings alone. One of the key elements of a research report is an informative description of the methods used to conduct the experiment, to collect the data, and to analyze them. Because the scientific enterprise rests on replication, the reporting of methods is essential: it allows others to stage a similar experiment, to design an identical study, and to evaluate whether the findings can be replicated. If the findings cannot be reproduced, the outcomes of the earlier experiment or trial will gradually or immediately lose credibility. In the past 2 decades, science has stumbled into a severe replication crisis. The social sciences were struck hardest, with psychology at the center and social psychology most of all (1). The results of efforts to replicate influential studies are dramatic. A 2015 report from a large consortium of scientists on attempts to replicate 100 psychology experiments showed that, although 97 of the 100 original studies had produced statistically significant results, only 36% of the replication attempts did so (2). The replication crisis was not limited to psychology, however; it has also struck clinical medicine and preclinical research. When John Ioannidis analyzed 49 articles in high-impact journals that had each generated >1000 citations, he observed that 45 of them claimed that an intervention had efficacy. Yet 7 were later contradicted by subsequent research, and for 7 others the effect was smaller in later studies. In all 14 cases, the subsequent studies were either larger or of a stronger design (3). Several factors contribute to this massive replication crisis. …
