Abstract

This article discusses two meta-analyses of randomized response technique (RRT) studies, the first covering 6 individual validation studies and the second covering 32 comparative studies. The meta-analyses focus on the performance of RRTs compared with conventional question-and-answer methods. The authors use the percentage of incorrect answers as the effect size for the individual validation studies and the standardized difference score (d-probit) as the effect size for the comparative studies. Results indicate that, compared with other methods, randomized response designs yield more valid data. For the individual validation studies, the mean percentage of incorrect answers in the RRT condition is 38%; in the other conditions, it is 49%. The more sensitive the topic under investigation, the higher the validity of the RRT results. However, both meta-analyses leave residual variance across studies unexplained, which indicates that the outcomes of RRT applications are not completely under the researcher's control.
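To make the randomized response idea concrete, the sketch below implements Warner's classic randomized response estimator, in which a randomizing device directs each respondent to either the sensitive statement or its negation with a known probability, so that no individual answer reveals the respondent's true status. The function name, the default design probability of 0.7, and the Python formulation are illustrative assumptions for this abstract, not part of the article or of the meta-analyses it reports.

```python
def warner_rrt_estimate(n_yes, n_total, p_truth=0.7):
    """Estimate the prevalence of a sensitive trait from Warner-style
    randomized response data (illustrative sketch, not the article's method).

    n_yes    : number of "yes" answers observed
    n_total  : total number of respondents
    p_truth  : known probability that the randomizing device pointed the
               respondent to the sensitive statement (must differ from 0.5)
    """
    if p_truth == 0.5:
        raise ValueError("p_truth must differ from 0.5 for the estimator to be identified")
    lam = n_yes / n_total  # observed proportion of "yes" answers
    # Invert lambda = p*pi + (1-p)*(1-pi) to recover the prevalence pi
    pi_hat = (lam - (1 - p_truth)) / (2 * p_truth - 1)
    # Sampling variance of the estimator (Warner, 1965)
    var_hat = lam * (1 - lam) / (n_total * (2 * p_truth - 1) ** 2)
    return pi_hat, var_hat


# Example: 420 "yes" answers out of 1000 respondents with a 0.7 design probability
prevalence, variance = warner_rrt_estimate(420, 1000, p_truth=0.7)
print(f"estimated prevalence = {prevalence:.3f}, variance = {variance:.5f}")
```

The added noise from the randomizing device inflates the variance relative to direct questioning; the validation studies summarized in the abstract ask whether the gain in honest reporting outweighs that loss of precision.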
