Abstract

Misreporting of sensitive characteristics in surveys is a major concern among survey methodologists and social scientists across disciplines. Indirect question formats, such as the Item Count Technique (ICT) and Randomized Response Techniques (RRT), including the Crosswise Model (CM) and the Triangular Model (TM), have been developed to protect respondents’ privacy by design and thereby elicit more truthful answers. These methods have also been praised for producing more valid estimates than direct questions. However, recent research has revealed a number of problems, such as the occurrence of false negatives, false positives, and dependencies on socioeconomic characteristics, indicating that at least some respondents may still cheat or lie when asked indirectly. This article systematically investigates (1) how well respondents comprehend and (2) to what extent they trust the ICT, CM, and TM. We conducted cognitive interviews with academics across disciplines, investigating how respondents perceive, think about, and answer questions on academic misconduct under these indirect methods. The results indicate that most respondents comprehend the basic instructions, but many fail to understand the logic and principles of these techniques. Furthermore, the findings suggest that comprehension and honest self-reporting are unrelated, thus violating core assumptions about the effectiveness of these techniques.
