Abstract

Online experiments offer an alternative for researchers interested in conducting behavioral research outside the laboratory. However, online assessment becomes a challenge when long, complex experiments must be conducted in a specific order or under the supervision of a researcher. The aim of this study was to test the computational validity and feasibility of a remote, synchronous reinforcement learning (RL) experiment conducted during the social-distancing measures imposed by the pandemic. An additional aim was to describe how a behavioral experiment originally designed for in-person administration was transformed into a supervised remote online experiment. Open-source software was used to collect data, conduct statistical analyses, and perform computational modeling. Python code was written to replicate computational models that simulate the effect of working memory (WM) load on RL performance. Our behavioral results indicated that, remotely and with a modified behavioral task, we replicated the effects of WM load on RL performance observed in previous in-person studies. Our computational analyses in Python also captured the expected effects of WM load on RL, suggesting that the algorithms and optimization methods reliably reproduced behavior. The behavioral and computational validation shown in this study, together with the detailed description of supervised remote testing, may be useful for researchers interested in conducting long and complex experiments online.

Supplementary Information: The online version contains supplementary material available at 10.3758/s13428-022-01982-6.
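As a rough illustration of the kind of model the abstract refers to, the sketch below implements a minimal RL + working-memory mixture in the spirit of Collins-and-Frank-style RLWM models, in which a capacity-limited WM module competes with an incremental RL module and WM's contribution shrinks as set size exceeds capacity. The function names, parameters (alpha, beta, capacity, rho, phi), and update rules here are illustrative assumptions, not the authors' actual code or the exact model fitted in the paper.

```python
import numpy as np

def softmax(values, beta):
    """Convert action values to choice probabilities (inverse temperature beta)."""
    v = beta * (values - values.max())
    p = np.exp(v)
    return p / p.sum()

def simulate_rlwm(stimuli, rewards_fn, n_actions=3, set_size=3,
                  alpha=0.1, beta=8.0, capacity=3, rho=0.9, phi=0.05,
                  seed=0):
    """Simulate choices from a simple RL + working-memory mixture (illustrative).

    RL module: incremental delta-rule Q-learning with learning rate alpha.
    WM module: one-shot storage of the last outcome for each stimulus,
    decaying toward uniform at rate phi. Its weight in the choice policy,
    w = rho * min(1, capacity / set_size), falls as WM load rises.
    """
    rng = np.random.default_rng(seed)
    q = np.full((set_size, n_actions), 1.0 / n_actions)   # RL values
    wm = np.full((set_size, n_actions), 1.0 / n_actions)  # WM weights
    w = rho * min(1.0, capacity / set_size)               # WM reliance shrinks with load
    choices, outcomes = [], []
    for s in stimuli:
        wm += phi * (1.0 / n_actions - wm)                # WM decay toward uniform
        p = w * softmax(wm[s], beta) + (1 - w) * softmax(q[s], beta)
        a = rng.choice(n_actions, p=p)
        r = rewards_fn(s, a)                              # 1 if correct, else 0
        q[s, a] += alpha * (r - q[s, a])                  # delta-rule update
        wm[s] = 1.0 / n_actions
        wm[s, a] = 1.0 if r else 0.0                      # one-shot WM encoding
        choices.append(a)
        outcomes.append(r)
    return np.array(choices), np.array(outcomes)
```

In an analysis like the one the abstract describes, parameters of this kind would be fitted per participant, for example by minimizing the negative log-likelihood of the observed choices with an off-the-shelf optimizer, and the WM-load effect appears as the mixture weight w shrinking when set_size exceeds capacity.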
