Not so long ago, in the early days of experimental psychology, researchers performed their data analyses by hand (or paid people to do these calculations for them). No wonder they took ample time beforehand to think through which analyses to perform (and especially which not to). Exploring data was not really an option. The t value or F value that resulted from the calculations provided a more or less definitive answer: H0 could be rejected or not. During the past decades, we have quickly moved from punch cards, via expensive mainframes, to relatively cheap but powerful personal computers that are available full-time to individual researchers and students. Nowadays, SPSS and other user-friendly statistical packages make it easy to run all types of analyses, which encourages analyzing your data in every possible way. It is stunningly easy to (re-)run analyses with or without a specific group of participants, with or without an additional between-subjects condition, with or without an extra variable in a scale. Indeed, it may be a waste of resources to leave a dataset partly unexplored when SPSS offers all the tools to explore it. In many research labs, analyzing results has shifted from spending days doing confirmatory analyses by hand to spending days engaging in all kinds of data analyses to see “what’s in the data.” Is this a bad thing? We do not think so, necessarily. Provided that you know what you are doing in SPSS, you can learn a lot from exploring a dataset. However, there clearly are risks (see Simmons, Nelson, & Simonsohn, 2011), which we will revisit later.

This contribution is a commentary on “Playing with Data—Or How to Discourage Questionable Research Practices and Stimulate Researchers to Do Things Right” by Sijtsma (2015). In his paper, Klaas Sijtsma makes the important point that prevention of questionable research practices may be more important than detection. He suggests two policy measures. First, make research data and research materials publicly available. Second, encourage researchers to consult methodologists or statisticians for help and a second opinion.

In this commentary, we will not reiterate the points made by Klaas Sijtsma. We support his analysis and his policy recommendations. Making research data and research materials publicly available and consulting methodologists and statisticians will help to increase the transparency of the data analysis phase of the empirical research cycle within psychology. Currently, in too many cases, this phase is the exclusive domain of (small groups of) individual researchers and their personal computers. However, making the data analysis phase transparent to others will not prevent scientific fraud. A person who willingly aims to fool others will always be able to do so, even when sharing (well-faked) data. Nevertheless, transparency will help to reduce the errors researchers may make unwittingly due to incorrect methodological decisions and the incorrect use of statistical methods (see Sijtsma, 2015).

The current contribution focuses on what we think is quickly becoming a misnomer in our field, namely the use of the phrase “questionable research practices” for certain types of practices when analyzing and reporting data. We suggest replacing this with the, in our view, more accurate phrase “questionable reporting practices” in relation to most potentially problematic data analysis strategies. The abbreviation may remain the same, namely QRPs.
However, the meaning and use of the phrase will be more positive and more encouraging to junior and senior researchers, and will do more justice to the important distinction between confirmation and exploration when analyzing data and reporting results (see Wagenmakers, 2012).