Abstract

There is a disciplinary assumption in our field that surveys with low response rates produce biased estimates, which leads to the use of simple rules for judging the quality of survey data. Surveys with “low” response rates fail this “response rate test” and become difficult to publish. Most of our research methods texts list these rules: e.g., “A response rate below 60% is a disaster, and even a 70% response rate is not much more than minimally acceptable”. Editors embrace this view, and often reject out of hand any study failing to reach this conventional standard. We argue that our field’s use of response rate rules in evaluating scholarship is based more on disciplinary custom than on survey science. In this paper, we describe the long-term downward trend in response rates and address confusion about nonresponse bias and its relation to response rates. Using Groves and Peytcheva’s (2008) meta-analytic data, we present evidence about the magnitude of the estimate- and study-level relationships between response rates and two different measures of nonresponse bias in univariate estimates. We then discuss several consequences of using the “response rate test” to judge data quality.
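
The confusion the abstract refers to centers on the standard deterministic decomposition of nonresponse bias for a respondent mean, the framework underlying Groves and Peytcheva's (2008) meta-analysis. A minimal sketch in LaTeX, with symbol names (\(\bar{y}_r\), \(\bar{y}_{nr}\), \(n_{nr}\)) chosen here for illustration:

\[
\mathrm{Bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}_n \;=\; \left(\frac{n_{nr}}{n}\right)\left(\bar{y}_r - \bar{y}_{nr}\right)
\]

where \(\bar{y}_r\) is the respondent mean, \(\bar{y}_{nr}\) the nonrespondent mean, \(\bar{y}_n\) the full-sample mean, and \(n_{nr}/n\) the nonresponse rate. The bias is the *product* of the nonresponse rate and the respondent–nonrespondent difference, so a low response rate produces a large bias only when respondents and nonrespondents actually differ on the estimate in question. This is why response rates alone are a poor proxy for data quality.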
