Abstract
An experimental science relies on solid and replicable results. The last few years have seen a rich discussion of the reliability and validity of psychological science and of whether our experimental findings can falsify our existing theoretical models. But concerns have also arisen that this movement may halt theoretical development. In this article, we re-analyze the data from an article published in this journal that concluded that lab site did not matter as a predictor of Stroop performance and, therefore, that context was likely to matter little in predicting the outcome of the Stroop task. We challenge this conclusion via a new analytical method, supervised machine learning, that "lets the data speak". We apply this approach to the Stroop task from Many Labs 3 to illustrate the utility of machine learning, and find surprising results. We discuss where our conclusions differ from those of the original article, which variables need to be controlled for in future inhibitory control tasks, and why psychologists can use machine learning to find surprising, yet solid, results in their own data.
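The abstract does not specify which supervised learner was used, so the following is only a minimal sketch of the general approach: train a model with and without lab site as a feature and compare cross-validated predictive performance. The data here are synthetic, and the learner (a random forest), the feature names, and all numeric values are assumptions for illustration, not the authors' actual analysis of the Many Labs 3 data.

```python
# Hypothetical sketch: does "lab site" improve out-of-sample prediction
# of Stroop interference? Synthetic data; not the authors' analysis.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
site = rng.integers(0, 20, n)              # 20 hypothetical lab sites
age = rng.normal(25, 5, n)                 # one illustrative covariate

# Simulated Stroop interference scores (ms): a small per-site shift
# plus an age trend and noise -- purely invented effect sizes.
site_effect = rng.normal(0, 5, 20)[site]
y = 80 + 0.5 * (age - 25) + site_effect + rng.normal(0, 20, n)

X_full = np.column_stack([site, age])      # features including lab site
X_nosite = age.reshape(-1, 1)              # features without lab site

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2_full = cross_val_score(model, X_full, y, cv=5, scoring="r2").mean()
r2_nosite = cross_val_score(model, X_nosite, y, cv=5, scoring="r2").mean()
print(f"CV R^2 with site:    {r2_full:.3f}")
print(f"CV R^2 without site: {r2_nosite:.3f}")
```

If lab site carries predictive signal, the cross-validated R² with site should exceed the R² without it; "letting the data speak" here means the model, not a pre-specified linear hypothesis, decides how site relates to performance.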