Abstract

We investigated whether statistical-reporting inconsistencies could be avoided if journals implemented the tool statcheck in the peer-review process. In a preregistered pretest–posttest quasi-experiment covering more than 7,000 articles and more than 147,000 extracted statistics, we compared the prevalence of reported p values that were inconsistent with their degrees of freedom and test statistics in two journals that implemented statcheck in their peer-review process (Psychological Science and Journal of Experimental Social Psychology) and two matched control journals (Journal of Experimental Psychology: General and Journal of Personality and Social Psychology), before and after statcheck was implemented. Preregistered multilevel logistic regression analyses showed that the decrease in both inconsistencies and decision inconsistencies (i.e., inconsistencies around p = .05 that affect the statistical conclusion) was considerably steeper in statcheck journals than in control journals, offering preliminary support for the notion that statcheck can help journals avoid statistical-reporting inconsistencies in published articles. We discuss the limitations and implications of these findings.
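The core idea behind this kind of consistency check can be illustrated with a minimal sketch: recompute the p value from the reported test statistic and compare it with the reported p value. The sketch below uses a two-sided z test (so the normal CDF can be computed with Python's standard library) and a simple tolerance; the actual statcheck tool handles many test types (t, F, chi-square, r, z) and uses rounding-based consistency bounds rather than a fixed tolerance, so the function names and thresholds here are illustrative assumptions, not statcheck's implementation.

```python
import math


def two_sided_p_from_z(z: float) -> float:
    """Two-sided p value for a z statistic, via the normal CDF (math.erf)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))


def check_consistency(z: float, reported_p: float,
                      alpha: float = 0.05, tol: float = 0.0005):
    """Recompute p from z and flag (in)consistencies.

    Returns (recomputed_p, inconsistent, decision_inconsistent), where a
    "decision inconsistency" means the reported and recomputed p values
    fall on opposite sides of alpha (here .05), so the statistical
    conclusion would change.  The fixed tolerance is a simplification of
    statcheck's rounding-interval logic.
    """
    recomputed = two_sided_p_from_z(z)
    inconsistent = abs(recomputed - reported_p) > tol
    decision_inconsistent = inconsistent and (
        (reported_p < alpha) != (recomputed < alpha)
    )
    return recomputed, inconsistent, decision_inconsistent
```

For example, a reported "z = 1.96, p = .06" would be flagged as a decision inconsistency, because the recomputed p value (about .050) falls below the .05 threshold while the reported one does not.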
