Abstract

Several special questioning techniques have been developed to counteract misreporting on sensitive survey questions, for example, questions about criminal behavior. However, doubts have been raised about their validity and practical value, as well as about the strategy of testing their validity using the "more-is-better" assumption in comparative survey experiments. This is because such techniques can be prone to generating false positive estimates, that is, counting "innocent" respondents as "guilty" ones. This article investigates the occurrence of false positive estimates by comparing direct questioning, the crosswise model (CM), and the item count technique (ICT). We analyze data from two online surveys (N = 2,607 and 3,203) carried out in Germany and Switzerland. Respondents answered three questions about traits whose true prevalence is known to be zero. The results show that the CM suffers more from false positive estimates than the ICT: CM estimates reach up to 15 percent for a true value of zero, whereas the mean of the ICT estimates does not differ significantly from zero. We further examine factors behind the biased CM estimates and show that speeding through the questionnaire (random answering) and problems with the measurement procedure, namely with the unrelated questions, are responsible. Our findings suggest that the CM is problematic and should not be used or evaluated without the possibility of accounting for false positives. For the ICT, the issue is less severe.
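The false-positive mechanism in the crosswise model can be illustrated with the standard CM prevalence estimator. The sketch below is not taken from this article; the response counts and the unrelated-question prevalence are hypothetical, chosen only to show how random answering inflates an estimate whose true value is zero.

```python
def crosswise_estimate(n_same, n_total, p):
    """Standard crosswise-model (CM) prevalence estimator.

    Each respondent jointly considers a sensitive question and an
    unrelated question with known prevalence p, and reports only
    whether the two answers are the same ("both yes or both no").
    Under truthful answering the probability of a "same" response is
        lambda = pi * p + (1 - pi) * (1 - p),
    which for p != 0.5 solves to
        pi = (lambda + p - 1) / (2 * p - 1).
    """
    if p == 0.5:
        raise ValueError("p must differ from 0.5 for the CM to be identified")
    lam = n_same / n_total
    return (lam + p - 1) / (2 * p - 1)

# Hypothetical unrelated question with known prevalence p = 0.20.
# Truthful respondents with true prevalence pi = 0 answer "same"
# with probability 1 - p = 0.80, so the estimator recovers zero:
honest = crosswise_estimate(800, 1000, 0.20)   # -> 0.0

# Pure random answering ("speeding") makes "same" responses occur
# with probability 0.5, which the estimator maps to pi = 0.5 even
# though the true prevalence is zero -- a false positive:
random_answers = crosswise_estimate(500, 1000, 0.20)  # -> 0.5
```

This illustrates why even a modest share of respondents answering at random biases CM estimates upward: any deviation of the "same" rate from 1 − p toward 0.5 is read as nonzero prevalence.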
