Abstract

Eliciting honest answers to sensitive questions is frustrated if subjects withhold the truth for fear that others will judge or punish them. The resulting bias is commonly referred to as social desirability bias, a subset of what we label sensitivity bias. We make three contributions. First, we propose a social reference theory of sensitivity bias to structure expectations about survey responses on sensitive topics. Second, we explore the bias-variance trade-off inherent in the choice between direct and indirect measurement technologies. Third, to estimate the extent of sensitivity bias, we meta-analyze the set of published and unpublished list experiments (also known as the item count technique) conducted to date and compare the results with direct questions. We find that sensitivity biases are typically smaller than 10 percentage points and in some domains are approximately zero.
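In the notation used in the summary below (π* for the true prevalence rate, δ for sensitivity bias), the abstract's definition can be stated as a gap between the expected direct-question estimate and the truth; this expectation form is our paraphrase, not an equation quoted from the paper:

    \delta = \mathbb{E}\left[\hat{\pi}_{\mathrm{direct}}\right] - \pi^{*}

Here π̂_direct is the average self-report to the direct question, so under this convention δ > 0 corresponds to overreporting (as with turnout) and δ < 0 to underreporting.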

Highlights

  • We find suggestive evidence of overreporting of voter turnout

  • Subjects appear to honestly report their prejudices based on race, religion, and sexual orientation

  • List experiments and similar indirect methods should be used only with large samples or when sensitivity biases are expected to be substantial


Summary

A SOCIAL REFERENCE THEORY OF SENSITIVITY BIAS

Why do questions about sensitive topics in surveys yield biased responses? We develop a social reference theory of sensitivity bias that distinguishes between the sensitivity of the topic and the properties of the measurement tool (typically self-reported responses to direct questions in sample surveys). In order for sensitivity to vary across groups, respondent beliefs about social referents (their preferences, their ability to monitor, or the costs they impose) must be heterogeneous. In such cases, researchers can employ a list experiment to measure outcomes and estimate the difference in prevalence rates using an interaction model.

Consider a study of N subjects with a true prevalence rate (π*) in which the direct question has a sensitivity bias (δ). The direct question yields a biased but precise estimate; the list experiment removes the bias at the cost of much higher variance. The intuition for this stark shortcoming of the list experiment is that only half the sample is asked about the sensitive item, and even that half answers it only indirectly, shrouded among the control items. We present a meta-analysis of list experiments to characterize the level of sensitivity bias in four political science literatures: turnout; prejudice based on race, religion, and sexual orientation; vote buying; and political attitudes in authoritarian contexts. The true position of each study relative to indifference between the two designs is better represented by its effective sample size, adjusting for any improvements to list experiment design and analysis implemented in that study.
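To make the bias-variance trade-off concrete, below is a minimal simulation sketch in Python, assuming a stylized design of our own devising: four independent Bernoulli(0.5) control items, underreporting of exactly δ on the direct question, and a 50/50 treatment split. The function name simulate_once and all parameter values are illustrative and are not taken from the paper.

    # Minimal sketch: direct question vs. list experiment (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_once(n=2000, pi_star=0.30, delta=0.10, j_controls=4):
        # True sensitive trait for each subject (requires delta <= pi_star).
        sensitive = rng.random(n) < pi_star

        # Direct question: enough true holders deny the trait that the
        # estimate is biased downward by delta on average.
        deny = rng.random(n) < (delta / pi_star)
        direct_estimate = (sensitive & ~deny).mean()

        # List experiment: a random half sees the control list plus the
        # sensitive item; respondents report only the total count.
        treated = rng.random(n) < 0.5
        counts = rng.binomial(j_controls, 0.5, size=n) + (treated & sensitive)
        # Unbiased difference-in-means estimator of prevalence.
        list_estimate = counts[treated].mean() - counts[~treated].mean()

        return direct_estimate, list_estimate

    sims = np.array([simulate_once() for _ in range(2000)])
    rmse = np.sqrt(((sims - 0.30) ** 2).mean(axis=0))
    print(f"direct RMSE: {rmse[0]:.3f}  list RMSE: {rmse[1]:.3f}")

With these illustrative numbers (N = 2,000, δ = 0.10), the direct question's error is almost entirely bias while the list experiment's is almost entirely variance, so the unbiased list experiment wins; shrink δ toward zero and the ranking flips. This is the variance penalty described above: only half the sample is asked about the sensitive item, and even then the answer is shrouded among the control items, which is why indirect methods pay off only with large samples or substantial expected bias.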

