Abstract

Crowdsourcing has emerged as a preferred data collection method among advertising and social science researchers because crowdsourced samples avoid the higher costs associated with professional panel data. Yet there are ongoing concerns about the quality of data from online sources. This research examines differences in data quality for an advertising experiment across five popular online data sources, including professional panels and crowdsourced platforms. Underlying mechanisms affecting data quality, including response satisficing, multitasking, and effort, are also examined. As proposed, a serial mediation model shows that data source is related, both directly and indirectly, to these antecedents of data quality. Satisficing is positively related to multitasking and negatively related to effort, and both mediators, operating in parallel, predict data quality; the indirect effects of data source on data quality through these mediating variables are significant. In general, a vetted MTurk sample (i.e., CloudResearch Approved) produces higher-quality data than the other four sources. Regardless of the data source, researchers should use safeguards to ensure data quality. Safeguards and other strategies for obtaining high-quality data from online samples are offered.
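As a reading aid, the serial mediation structure described in the abstract can be sketched as a set of linear path equations. This is a minimal sketch using conventional Hayes-style mediation notation; the coefficient labels and equation forms are illustrative assumptions, not specifications reported by the authors:

% Illustrative path equations (notation assumed, not taken from the paper)
\begin{aligned}
\text{Satisficing}  &= a_1\,\text{Source} + e_1 \\
\text{Multitasking} &= a_2\,\text{Source} + d_1\,\text{Satisficing} + e_2 \\
\text{Effort}       &= a_3\,\text{Source} + d_2\,\text{Satisficing} + e_3 \\
\text{Quality}      &= c'\,\text{Source} + b_1\,\text{Multitasking} + b_2\,\text{Effort} + e_4
\end{aligned}

Under this sketch, the abstract's findings correspond to d_1 > 0 (satisficing increases multitasking), d_2 < 0 (satisficing reduces effort), and significant indirect effects of data source on quality along paths such as a_1 d_1 b_1.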
