Abstract

In many post-election surveys, the proportion of respondents who claim to have voted is greater than government-reported turnout rates. These differences have often been attributed to respondent lying (e.g., Burden 2000). In a search for greater accuracy, scholars have replaced respondent self-reports of turnout with government records of their turnout (a.k.a. turnout validation). Some scholars have interpreted “validated” turnout estimates as more accurate than respondent self-reports because “validated” rates tend to be lower than aggregate self-reported rates and closer to government-reported rates. We explore the viability of turnout validation efforts. We find that several apparently viable methods of matching survey respondents to government records severely underestimate the proportion of Americans who were registered to vote. Matching errors that severely underestimate registration rates also drive down “validated” turnout estimates. As a result, when “validated” turnout estimates appear more accurate than self-reports because they produce lower turnout estimates, the apparent accuracy is likely an illusion. Moreover, among respondents whose self-reports can be validated against government records, the accuracy of self-reports is extremely high. This would not occur if lying were the primary explanation for differences between reported and official turnout rates. These findings challenge the notion that the practice of “turnout validation” offers a means of measuring turnout that is more accurate than survey respondents’ self-reports.
