Abstract
Like many other areas of science, experimental psychology is affected by a “replication crisis” that is causing concern in many fields of research. Approaches to tackling this crisis include better training in statistical methods, greater transparency and openness, and changes to the incentives created by funding agencies, journals, and institutions. Here, I argue that if proposed solutions are to be effective, we also need to take into account human cognitive constraints that can distort all stages of the research process, including design and execution of experiments, analysis of data, and writing up findings for publication. I focus specifically on cognitive schemata in perception and memory, confirmation bias, systematic misunderstanding of statistics, and asymmetry in moral judgements of errors of commission and omission. Finally, I consider methods that may help mitigate the effect of cognitive constraints: better training, including use of simulations to overcome statistical misunderstanding; specific programmes directed at inoculating against cognitive biases; adoption of Registered Reports to encourage more critical reflection in planning studies; and using methods such as triangulation and “pre-mortem” evaluation of study design to foster a culture of dialogue and criticism.
Highlights
The past decade has been a bruising one for experimental psychology
The publication of a paper by Simmons, Nelson, and Simonsohn (2011) entitled “False-positive psychology” drew attention to problems with the way in which research was often conducted in our field, which meant that many results could not be trusted.
If an untreated control group is studied over the same period, we find very similar rates of improvement (Wake et al., 2011)—presumably due to factors such as spontaneous resolution of problems or regression to the mean, which will lead to systematic bias in outcomes.
Summary
The past decade has been a bruising one for experimental psychology. The publication of a paper by Simmons, Nelson, and Simonsohn (2011) entitled “False-positive psychology” drew attention to problems with the way in which research was often conducted in our field, which meant that many results could not be trusted. I suspect another reason why people tend to underrate the seriousness of p-hacking is because it involves an error of omission (failing to report the full context of a p-value), rather than an error of commission (making up data).