Abstract

A challenge in academic and practitioner research is recruiting study participants who match target demographics, possess a desired skill set, and will participate for little to no compensation. An alternative to traditional participant recruitment is crowdsourcing participants through online labor markets, such as Amazon Mechanical Turk (AMT). AMT is a platform that provides tools for finding and recruiting participants with diverse demographics, skills, and experiences. This paper aims to demystify the use of crowdsourcing, and particularly AMT, by comparing the performance of traditionally recruited volunteers and AMT participants on tasks related to the evaluation of intelligent personal assistants (IPAs, such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana). The comparison of AMT and non-AMT samples indicated that, while the two samples differed on demographics, their task performance was not significantly different. The paper discusses the costs and benefits of using AMT samples and is of particular relevance to researchers who employ questionnaires and/or task-specific data collection methods in their work.