Researchers in many areas of psychology and neuroscience have grown concerned with what has been referred to as a crisis of replication and reliability in the field. These concerns have cut across a broad range of disciplines and have been raised in both the biomedical (Ioannidis, 2005, 2011) and psychological (Pashler & Harris, 2012; Simmons, Nelson, & Simonsohn, 2011) sciences. A number of reasons have been put forth for these concerns about replication, including conflicts of interest (Bakker & Wicherts, 2011; Ioannidis, 2011), misaligned incentives, questionable research practices (John, Loewenstein, & Prelec, 2012) that result in what has been referred to as p-hacking (Simmons et al., 2011), and ubiquitous low power (Button et al., 2013). Such problems lead to the publication of inflated effect sizes (Masicampo & Lalande, 2012; Vul & Pashler, 2012) and produce a higher incidence of false positives.

One could read this emerging literature and walk away disheartened by the state of many scientific fields, and perhaps of science in general. Alternatively, one could take this as an opportunity to step back and develop new procedures and methods for tackling at least some of the problems contributing to the crisis of replication, whether real or perceived. That is what is attempted in this special issue of Cognitive, Affective, & Behavioral Neuroscience, with a specific emphasis on studies that use functional neuroimaging to understand the neural mechanisms that support a range of cognitive and affective processes.
The articles in this special issue fall into three general categories: (1) research into the importance and influence of methods choices and reporting; (2) assessments of reliability and novel approaches to statistical analysis; and (3) studies of power, sample size, and the importance of both false positives and false negatives.

In terms of methodological concerns, it is important to note that studies using functional neuroimaging methods to study cognitive and affective processes need to be concerned with all of the same issues that apply to any behavioral study of a psychological process. As Plant et al. describe in their article in this issue, concerns about the timing of stimulus presentation and response collection in studies of psychological processes may contribute to replication difficulties, and these authors suggest some ways to assess this possibility and potentially correct it. In addition, studies in cognitive and affective neuroscience are subject to the same concerns about transparency in the reporting of methods as are behavioral studies, including all of the problematic behaviors that Simmons and others have suggested lead to spurious rejection of the null hypothesis (Simmons et al., 2011). These concerns led Simmons and colleagues to propose that all authors be required to include the following 21-word statement in their Method sections: "We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study" (Simmons, Nelson, & Simonsohn, 2012). Although the inclusion of such a statement has yet to be widely adopted by journals, it highlights the critical issue of transparency in method reporting.
One of the major concerns in the field of functional neuroimaging, whether using functional magnetic resonance imaging (fMRI), event-related potentials (ERPs), transcranial magnetic stimulation (TMS), or other techniques, is the plethora of analysis choices that a researcher can make, and the influence that these different choices clearly have on the results. Poldrack outlined this concern in his earlier work and suggested guidelines for reporting the method choices that could influence outcomes (Poldrack et al., 2008). In another prior work, Carp (2012) reviewed and summarized the failure of many researchers to follow these guidelines or to report key methodological details that could influence replication. …