Abstract

Researchers have increasingly warned about "p-hacking" and the improper use of control variables. This paper considers the effect that "researcher degrees of freedom" in the use of control variables have on the probability of Type I and Type II errors. We also examine the extent to which control variables can make marginal effect sizes (i.e., nonzero effects that are too small to reach statistical significance) appear significant (which we refer to as Type III errors), and how much control variable use can inflate effect sizes. We report the results of two computer simulations that include up to 10 control variables. We find that the inappropriate use of control variables poses little risk of Type I errors, given that the chance of a truly null effect in a typical multivariate analysis is very low. We also show that the use of control variables does not have a large effect on Type II errors, and that the practice of running analyses both with and without control variables will most often yield the same conclusion in both cases. That said, we did find that p-hacking substantially increases the probability of inappropriately detecting statistical significance and can notably inflate effect sizes. The practice of running analyses both with and without control variables can indeed reveal the potential for p-hacking, and discrepant results between bivariate and multivariate analyses suggest that authors need to carefully and clearly explain why the noted differences are theoretically and logically appropriate. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
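The simulation design the abstract describes can be illustrated with a small sketch (my own illustration, not the authors' code; the sample size, effect size, and subset-search depth are assumptions): fit a regression for a focal predictor with no controls, with all 10 irrelevant controls, and then "p-hack" by cherry-picking the control subset that maximizes the predictor's t-statistic.

```python
# Hypothetical sketch of the with/without-controls comparison and of
# p-hacking via control-variable selection. All parameters are assumed.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                    # focal predictor
controls = rng.normal(size=(n, 10))       # 10 irrelevant control variables
y = 0.15 * x + rng.normal(size=n)         # small true effect of x on y

def t_stat_for_x(X, y):
    """OLS t-statistic for the first column of X (after the intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof          # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

t_plain = t_stat_for_x(x[:, None], y)
t_full = t_stat_for_x(np.column_stack([x, controls]), y)

# "p-hack": search all subsets of up to 3 controls for the largest t
best = max(
    t_stat_for_x(np.column_stack([x, controls[:, list(s)]]), y)
    for k in range(4)
    for s in itertools.combinations(range(10), k)
)

print(f"no controls:  t = {t_plain:.2f}")
print(f"all controls: t = {t_full:.2f}")
print(f"best subset:  t = {best:.2f}")    # cherry-picked t is never smaller
```

On a run like this, the no-controls and all-controls t-statistics are typically close (matching the finding that both analyses usually yield the same conclusion), while the cherry-picked subset can only push the statistic upward, illustrating how selective control-variable use inflates apparent significance.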
