Abstract

A challenge in academic and practitioner research is recruiting study participants who match target demographics, possess a desired skill set, and will participate for little to no compensation. An alternative to these traditional recruitment struggles is crowdsourcing participants through online labor markets, such as Amazon Mechanical Turk (AMT). AMT is a platform that provides tools for finding and recruiting participants with diverse demographics, skills, and experiences. This paper aims to demystify the use of crowdsourcing, and particularly AMT, by comparing the performance of traditionally recruited volunteers and AMT participants on tasks related to the evaluation of intelligent personal assistants (IPAs, such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana). The comparison of AMT and non-AMT samples indicated that while the two samples differed demographically, their task performance did not differ significantly. The paper discusses the costs and benefits of using AMT samples and should be of particular relevance to researchers who employ questionnaires and/or task-specific data collection methods in their work.
