Abstract

Self-administered online surveys provide respondents with a higher level of privacy protection than interviewer-administered surveys. Yet studies show that asking sensitive questions remains problematic even in self-administered mode. Because respondents may be unwilling to reveal the truth and instead provide answers subject to social desirability bias, the validity of prevalence estimates of sensitive behaviors obtained via online surveys can be challenged. A well-known method to combat these problems is the Randomized Response Technique (RRT). However, convincing evidence that the RRT provides more valid estimates than direct questioning in online mode is still lacking. Moreover, an alternative approach called the Crosswise Model (CM) has recently been suggested to overcome some of the deficiencies of the RRT. We therefore conducted an experimental study in which different implementations of the RRT and the CM were tested and compared to direct questioning. Our study is a large-scale online survey on sensitive behaviors among students, such as cheating in exams and paper plagiarism. The results of the study reveal poor performance of the RRT, while the CM yielded significantly higher estimates of sensitive behaviors than direct questioning. We conclude that the CM is a promising approach for asking sensitive questions in self-administered surveys.
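
For readers unfamiliar with how CM prevalence estimates are computed, the sketch below illustrates the standard crosswise-model moment estimator (not code from this study): each respondent is given the sensitive question together with a non-sensitive question whose "yes" probability p is known, and reports only whether the answers to the two questions are the same or different. Since P(same) = πp + (1 − π)(1 − p), the prevalence π can be recovered from the observed share of "same" responses. The sample figures used here are purely hypothetical.

```python
import math

def crosswise_estimate(n_same: int, n_total: int, p: float):
    """Moment estimator for the Crosswise Model.

    n_same  -- respondents reporting the SAME answer to both questions
    n_total -- total number of respondents
    p       -- known 'yes' probability of the non-sensitive question
               (must differ from 0.5 for the model to be identified)
    Returns (prevalence estimate, standard error).
    """
    if not 0 < p < 1 or p == 0.5:
        raise ValueError("p must lie in (0, 1) and differ from 0.5")
    lam = n_same / n_total                  # observed share of 'same' answers
    pi_hat = (lam + p - 1) / (2 * p - 1)    # solves P(same) = pi*p + (1-pi)*(1-p)
    se = math.sqrt(lam * (1 - lam) / n_total) / abs(2 * p - 1)
    return pi_hat, se

# Hypothetical illustration: 620 of 1,000 respondents report 'same',
# non-sensitive question with known p = 0.25 (e.g., birthday in Jan/Feb/Mar).
est, se = crosswise_estimate(620, 1000, 0.25)
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```

A forced-response RRT estimate is obtained analogously, with the design probabilities of the randomizing device replacing p; in small samples either estimator can fall outside [0, 1] and is then typically truncated.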
