Abstract

Gaining valid answers to so-called sensitive questions is an age-old problem in survey research. Various techniques have been developed to guarantee anonymity and minimize the respondent’s feelings of jeopardy. Two such techniques are the randomized response technique (RRT) and the unmatched count technique (UCT). In this study the authors evaluate the effectiveness of different implementations of the RRT (using a forced-response design) in a computer-assisted setting and also compare the use of the RRT to that of the UCT. The techniques are evaluated according to various quality criteria, such as the prevalence estimates they provide, their ease of use, and respondent trust in the techniques. The results indicate that the RRTs are problematic in several respects, such as the limited trust they inspire and nonresponse, and that the RRT estimates are unreliable due to a strong false “no” bias, especially for the more sensitive questions. The UCT, however, performed well compared to the RRTs on all the evaluated measures. The authors conclude that the UCT is a promising alternative to the RRT in self-administered surveys and that future research should be directed toward evaluating and improving the technique.
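
As background for the two designs discussed above, the following is a minimal sketch of the textbook prevalence estimators commonly associated with a forced-response RRT and with the UCT (item count / list experiment). The design parameters `p_truth` and `p_forced_yes` and the example figures are illustrative assumptions, not the specific instrument or data used in this study.

```python
def rrt_forced_response_estimate(yes_rate, p_truth=0.75, p_forced_yes=0.125):
    """Forced-response RRT (illustrative parameters): a randomizer instructs
    respondents to answer truthfully with probability p_truth, or to give a
    forced 'yes' (probability p_forced_yes) or forced 'no' otherwise.
    Observed yes-rate = p_truth * prevalence + p_forced_yes, so the
    prevalence is recovered by inverting this relation."""
    return (yes_rate - p_forced_yes) / p_truth

def uct_estimate(mean_count_treatment, mean_count_control):
    """Unmatched count technique: the treatment group's item list includes the
    sensitive item, the control group's list does not; the prevalence estimate
    is the difference in mean reported counts between the two groups."""
    return mean_count_treatment - mean_count_control

# Hypothetical example: 30% observed 'yes' answers under the forced-response
# RRT, and mean list counts of 2.4 (treatment) vs. 2.1 (control) under the UCT.
print(rrt_forced_response_estimate(0.30))  # ~0.233
print(uct_estimate(2.4, 2.1))              # ~0.30 (ignoring sampling error)
```
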
