Abstract

So-called “10-point” rating scales are among the most commonly used measurement tools in survey research and have been used successfully with many types of constructs, including items that ask respondents to rate their satisfaction with political leaders, the economy, and their overall quality of life. However, the exact format of these scales has varied widely: some researchers use scales that run from 1 to 10, while others use scales that run from 0 to 10. The number of scale points assigned labels also varies, with some researchers labeling only the endpoints, others labeling the endpoints and the scale midpoint, and still others labeling all of the scale points. Previous research (Andrews 1984; Cox 1980; Garratt et al. 2011; Schwarz et al. 1991) has sought to understand how response scales can influence the distribution of survey data and how their labeling and design affect the validity and reliability of survey data. Although the literature on response scales and their effects on survey data is extensive, scholars have yet to report investigations of the linkages between response scales and resulting item nonresponse. In particular, little is known about the impact of the format of the 10-point response scale on levels of item nonresponse in survey data. We seek to increase knowledge on this issue by reporting the results of two experimental studies designed to test whether the format of the 10-point response scale used has a significant and nonignorable influence on item nonresponse and, thus, on levels of data quality in random-digit-dial (RDD) surveys. In doing so, we argue that when designing a 10-point scale, researchers must consider not only the validity and reliability of the scale but also the level of item nonresponse the scale format can be expected to produce.
