Abstract

We examine 244 independent tests of interaction effects published in recent issues of four leading journals in the organizational sciences to estimate the replicability of reported statistically significant interaction effects. A z-curve analysis (Brunner & Schimmack, 2020) of the distribution of p values indicates an estimated replicability of 37%, although this figure varied somewhat across the four journals. We also find that none of the coded studies reported having conducted a priori power analyses, and that only one reported having preregistered its hypotheses, despite longstanding exhortations for researchers to plan their studies to have adequate power and to engage in open science practices. Our results suggest that moderation results reported in these leading journals fail to meet the methodological and statistical burden that would lead us to recommend that scientists and practitioners rely on these findings to inform their research and practice.
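The core idea behind the z-curve analysis mentioned above can be illustrated with a simplified sketch: convert reported two-sided p values to absolute z statistics, keep the statistically significant ones, and estimate their average power (which approximates expected replicability). Note this is only a rough illustration; the actual z-curve method of Brunner and Schimmack (2020) fits a finite mixture of truncated normal distributions and corrects for selection on significance, which this naive version does not. The p values below are hypothetical, not drawn from the coded studies.

```python
from statistics import NormalDist

norm = NormalDist()
ALPHA_Z = norm.inv_cdf(1 - 0.05 / 2)  # ~1.96, two-sided 5% threshold


def p_to_z(p):
    """Convert a two-sided p value to an absolute z statistic."""
    return norm.inv_cdf(1 - p / 2)


def naive_replicability(p_values):
    """Average estimated power of the significant results.

    Naively treats each observed z as the true expected z of the
    effect (no correction for selection bias, unlike real z-curve).
    Power = P(|Z| > ALPHA_Z) when Z ~ Normal(z_observed, 1),
    approximated here by the dominant tail.
    """
    zs = [p_to_z(p) for p in p_values if p_to_z(p) > ALPHA_Z]
    powers = [1 - norm.cdf(ALPHA_Z - z) for z in zs]
    return sum(powers) / len(powers)


# Hypothetical p values for five "significant" interaction tests
example_ps = [0.049, 0.03, 0.01, 0.004, 0.0005]
print(round(naive_replicability(example_ps), 2))
```

Because just-significant results (p near .05) have estimated power near 50%, a literature dominated by such results yields a low expected replication rate, which is the intuition behind the 37% estimate reported above.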
