Abstract

Social desirability and the fear of negative consequences often deter a considerable share of survey respondents from answering sensitive questions truthfully, biasing the resulting prevalence estimates. Indirect questioning techniques such as the Randomized Response Technique (RRT) are intended to mitigate misreporting by fully concealing individual answers. However, it is far from clear whether these indirect techniques actually produce more valid measurements than standard direct questioning. To evaluate the validity of different sensitive question techniques, we carried out an online validation experiment on Amazon Mechanical Turk in which respondents' self-reports of norm-breaking behavior (cheating in dice games) were validated against their observed behavior. This document describes the design of the validation experiment and provides details on the questionnaire, the implementations of the different sensitive question techniques, the fieldwork, and the resulting dataset. The appendix contains a codebook of the data and facsimiles of the questionnaire pages and other survey materials.
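
To illustrate how the RRT conceals individual answers while still allowing aggregate estimation, the following is a minimal sketch of one common variant, the forced-response design: a randomizing device (e.g., a die roll) determines whether a respondent answers truthfully or gives a forced answer, so no single "yes" is incriminating, yet prevalence can be recovered from the observed "yes" rate. The probabilities and function names below are illustrative assumptions, not the design used in the experiment described here.

```python
import random

def rrt_response(true_answer, p_truth=2/3, p_forced_yes=1/6):
    """Forced-response RRT: with probability p_truth the respondent
    answers truthfully; otherwise a forced 'yes' or 'no' is given.
    (Probabilities here are illustrative, e.g. from a six-sided die.)"""
    r = random.random()
    if r < p_truth:
        return true_answer          # truthful answer
    elif r < p_truth + p_forced_yes:
        return True                 # forced 'yes'
    else:
        return False                # forced 'no'

def estimate_prevalence(responses, p_truth=2/3, p_forced_yes=1/6):
    """Recover the prevalence pi from the observed 'yes' rate lam,
    using lam = pi * p_truth + p_forced_yes."""
    lam = sum(responses) / len(responses)
    return (lam - p_forced_yes) / p_truth

# Simulate 10,000 respondents with a true prevalence of 0.30.
random.seed(1)
true_prev = 0.30
answers = [rrt_response(random.random() < true_prev) for _ in range(10_000)]
print(round(estimate_prevalence(answers), 3))  # close to 0.30
```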
