Abstract

The use of crowdsourced data has become extremely popular in marketing and public policy research. However, there are concerns about the validity of studies that source data from crowdsourcing platforms such as Amazon Mechanical Turk (MTurk). Using five different online sample sources, including multiple MTurk samples and professionally managed panels, the authors address issues related to online data quality and its effects on results for a policy-based 2 × 2 between-subjects experiment. They show that survey response satisficing, as well as multitasking, is related to attention-check performance beyond demographic differences, and that these relationships differ substantially across the five online data sources. The authors identify segments of high and low response satisficers using a multi-item measure and show that the policy-relevant results of the experiment differ critically between these segments of online respondents. The findings have implications for concerns about failures to replicate results in the policy and consumer well-being, business, and social science literatures. The authors offer suggestions for reducing the problematic effects of response satisficing and poor data quality, which are shown to differ substantially across the sample sources examined.
