Abstract

In this study, I examine data-quality evaluation methods in online surveys and how frequently they are used. Drawing on the survey-methodology literature, I identified 11 distinct assessment categories and analyzed their prevalence across 3,298 articles published in 2022 in 200 psychology journals from the Web of Science Master Journal List. These English-language articles reported original data from self-administered online questionnaires. Strikingly, 55% of the articles employed no data-quality evaluation at all, and 24% employed only one method, despite the wide repertoire of methods available. The most common data-quality indicators were attention-check items (22%) and nonresponse rates (13%). Strict and unjustified nonresponse-based data-exclusion criteria were frequently observed. The results highlight a pattern of inadequate quality control in online survey research, leaving findings vulnerable to bias from automated response bots as well as respondent carelessness and fatigue. More thorough data-quality assurance is currently needed in online survey research.
