Abstract

Current discussions on improving the reproducibility of science often revolve around statistical innovations. However, a valid operationalization of phenomena is equally important for improving methodological rigour. Operationalization is the process of translating theoretical constructs into measurable laboratory quantities; its validity is therefore central to the quality of empirical studies. But do differences in the validity of operationalization affect the way scientists evaluate scientific literature? To investigate this, we manipulated the strength of the operationalization of three published studies and sent them to researchers via email. In the first task, researchers were presented with a summary of the Method and Results sections of one of the studies and were asked to guess, via a multiple-choice questionnaire, which hypothesis the study investigated. In a second task, researchers were asked to rate the perceived quality of the study. Our results show that (1) researchers are better at inferring the underlying research question from empirical results when the operationalization is more valid, but (2) this difference in validity is only partly reflected in their judgements of the study's quality. Taken together, these results give partial corroboration to the notion that researchers' evaluations of research results are not affected by operationalization validity.

Highlights

  • Subject category: Psychology and cognitive neuroscience. Subject areas: Statistics. Keywords: operationalization, methodology, construct validity, measurement, metascience, replication crisis, reproducibility, replicability.

  • Most participants indicated that they work in the field of Social Psychology (36%), followed by Clinical Psychology (19%), Cognitive Psychology (14%), Personality Psychology (9%), Other (9%), Experimental Psychology (8%), Methodology and Statistics (2%), Neuroscience (2%), Biological Psychology (0.7%) and Medicine (0.3%).

  • Our data are in line with our first hypothesis: when we reduced the validity of the operationalization, researchers were less able to reverse-engineer the underlying hypothesis.


Summary

Introduction

The goal of this paper is not to investigate which studies have a valid operationalization, but to gather empirical evidence on the extent to which researchers consider the validity of operationalization when drawing conclusions about empirical findings. This is an important question, as a study can yield convincing statistical results while not operationalizing the underlying concept well. The following study focuses on concepts, which are the basic building blocks on which theories stand.


