Abstract

Self-administered online surveys offer respondents greater privacy protection than interviewer-administered surveys. Yet studies show that asking sensitive questions is problematic even in self-administered mode. Because respondents may be unwilling to reveal the truth and instead give answers subject to social desirability bias, the validity of prevalence estimates of sensitive behaviors obtained via online surveys can be challenged. A well-known method to combat these problems is the Randomized Response Technique (RRT). However, convincing evidence that the RRT yields more valid estimates than direct questioning in online mode is still lacking. Moreover, an alternative approach, the Crosswise Model (CM), has recently been suggested to overcome some of the deficiencies of the RRT. In the context of an online survey on plagiarism and cheating on exams among students of two Swiss universities (N = 6,494), we tested different implementations of the RRT and the CM and compared them to direct questioning using a randomized experimental design. Results reveal a poor performance of the RRT, which failed to elicit higher prevalence estimates than direct questioning. Using the CM, however, significantly higher prevalence estimates were obtained, making it a promising new alternative to the conventional RRT.
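In the CM, each respondent jointly considers the sensitive question and a nonsensitive question with a known "yes" probability p (e.g., whether one's birthday falls in a given period) and reports only whether the answers to the two questions are the same or different. The abstract does not state the estimator used; the following is a minimal sketch of the standard CM point estimator and its sampling variance, with the function name and the input figures being illustrative assumptions, not values from the study:

```python
def cm_prevalence(k, n, p):
    """Estimate sensitive-trait prevalence under the Crosswise Model.

    k: number of respondents choosing the "same answer" option
       (i.e., both answers yes or both answers no)
    n: total number of respondents
    p: known probability of "yes" on the nonsensitive question (p != 0.5)
    """
    lam = k / n  # observed proportion of "same answer" responses
    # Standard CM point estimator: pi_hat = (lambda + p - 1) / (2p - 1)
    pi_hat = (lam + p - 1) / (2 * p - 1)
    # Sampling variance of the estimator (simple random sampling)
    var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
    return pi_hat, var

# Illustrative numbers only: with p = 0.15 and 710 of 1,000 respondents
# reporting "same answer", the estimated prevalence is 0.20.
pi_hat, var = cm_prevalence(710, 1000, 0.15)
```

Because respondents never reveal the answer to the sensitive question itself, no individual response is incriminating, which is the privacy argument behind both the RRT and the CM.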
