Abstract

While the migration of public opinion surveys to online platforms has often lowered costs and enhanced timeliness, it has also created new vulnerabilities. Respondents completing the same survey multiple times from different IP addresses, overseas workers posing as Americans, and algorithms designed to complete surveys are among the threats that have emerged in this new era. This paper measures the prevalence of such respondents and their impact on survey data quality, and demonstrates methodological approaches for doing so. Prior studies typically examine just one platform and rely on closed-ended questions and/or paradata (e.g., IP addresses) to identify untrustworthy interviews; this is problematic because such data are relatively easy for bad actors to fake. To overcome these limitations, we carried out a large-scale study of insincere respondents using large samples from six online platforms: three opt-in survey panels, two address-recruited survey panels, and a crowdsourced sample. Rather than relying solely on closed-ended responses, we incorporated an analysis of 375,834 open-ended answers, which by their nature offer a more sensitive indicator of whether a respondent is genuine. The incidence of insincere respondents varied significantly by the type of online sample. Critically, insincere respondents did not simply answer at random: they tended to select positive answer choices, introducing a small, systematic bias into estimates such as presidential approval. Two common data-quality checks failed to detect most insincere respondents.
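The abstract does not spell out which screening rules were applied to the open-ended answers. Purely as an illustration of why open-ended text is a sensitive signal, the minimal sketch below flags answers that look like duplicates, keyboard mashing, or one-word non-answers; the function name `flag_open_ended`, the heuristics, and all thresholds are assumptions for this example, not the study's actual method.

```python
# Hypothetical illustration: simple screens for suspicious open-ended answers.
# These heuristics (duplicate text, gibberish, one-word non-answers) are not
# taken from the paper; they only sketch how open-ended responses might be
# flagged for human review.
from collections import Counter
import re

def flag_open_ended(answers):
    """Return a list of (index, reasons) for answers that look suspicious."""
    # Normalize text so trivially different copies ("Good!!" vs. "good") collide.
    normalized = [re.sub(r"\W+", " ", a).strip().lower() for a in answers]
    counts = Counter(normalized)

    flags = []
    for i, text in enumerate(normalized):
        reasons = []
        # 1. Identical answers submitted by several "different" respondents.
        if text and counts[text] >= 3:
            reasons.append("duplicate")
        # 2. Keyboard mashing: longer strings with almost no vowels.
        letters = re.sub(r"[^a-z]", "", text)
        if len(letters) >= 8 and sum(c in "aeiou" for c in letters) / len(letters) < 0.2:
            reasons.append("gibberish")
        # 3. Empty or one-word non-answers.
        if len(text.split()) <= 1:
            reasons.append("too_short")
        if reasons:
            flags.append((i, reasons))
    return flags

if __name__ == "__main__":
    sample = ["I like the candidate's tax plan", "good", "good", "good",
              "sdkfjhsdkfjh qwrtpqwrt", "Healthcare costs worry me the most"]
    for idx, why in flag_open_ended(sample):
        print(idx, repr(sample[idx]), why)
```

Checks of this kind are complementary to the closed-ended and paradata screens the abstract mentions; flagged answers would still need human adjudication before any respondent is dropped.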
