Abstract

Social science researchers increasingly recruit participants through Amazon's Mechanical Turk (MTurk) platform. Yet the physical isolation of MTurk participants and the perceived lack of experimental control have led to persistent concerns about the quality of the data that can be obtained from MTurk samples. In this paper we focus on two of the most salient concerns: that MTurk participants may not buy into interactive experiments and that they may produce unreliable or invalid data. We review existing research on these topics and present new data to address these concerns. We find that insufficient attention is no more a problem among MTurk samples than among other commonly used convenience or high-quality commercial samples, and that MTurk participants buy into interactive experiments and trust researchers as much as participants in laboratory studies do. Furthermore, we find that employing rigorous exclusion methods consistently boosts statistical power without introducing problematic side effects (e.g., substantially biasing the post-exclusion sample), and can thus provide a general solution for dealing with problematic respondents across samples. We conclude with a discussion of best practices and recommendations.
