In psycholinguistic research, reaction time data are traditionally obtained in person through laboratory-based experiments. Recently, data gathered online, whether from undergraduate students completing tasks at home or from anonymous crowdsourced workers, have been found to reliably replicate well-established L1 reaction time effects. However, because the reliability of L2 data obtained online has yet to be established, this study investigated the replicability, data quality, and logistical difficulties of two online self-paced reading data-gathering contexts (crowdsourcing and online students) and compared them with an in-person baseline from a previous study. Forty crowdsourced workers and 34 undergraduates completed a self-paced reading experiment online, and the results were compared with data from 44 in-person participants. Mixed-effects models showed that the in-person effects were replicated in both the crowdsourced and the online student data. However, data quality from the online students was inferior to that of the crowdsourced workers. In-person data gathering posed the most logistical constraints, followed by online students, with crowdsourcing being the most facilitative. However, the crowdsourced sample in the present study skewed toward higher L2 proficiency, as crowdsourcing platforms do not allow participants to be screened by proficiency. As such, researchers studying proficiency effects may prefer in-person or online student data gathering.