Abstract

Assessing individual-level theories of electoral participation requires survey-based measures of turnout. Yet, due to a combination of sampling problems and respondent misreporting, postelection surveys routinely overestimate turnout, often by large margins. Using an online survey experiment fielded after the 2015 British general election, we implement three alternative survey questions aimed at correcting for turnout misreporting and test them against a standard direct turnout question used in postelection studies. Comparing estimated to actual turnout rates, we find that while all question designs overestimate aggregate turnout, the item-count technique alleviates the misreporting problem substantially, whereas a direct turnout question with additional face-saving options and a crosswise model design help little or not at all. Moreover, regression models of turnout estimated using the item-count measure yield inferences about the correlates of electoral participation that are substantively similar to those from models estimated using “gold-standard” validated vote measures. These findings stand in contrast to those suggesting that item-count techniques do not help with misreporting in an online setting, and they are particularly relevant given the increasing use of online surveys in election studies.
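
The item-count (list experiment) technique referred to above is conventionally analysed with a difference-in-means estimator: the control group reports how many items on a list of innocuous statements apply to them, the treatment group sees the same list plus the sensitive item (here, having voted), and the gap in mean counts estimates the prevalence of the sensitive behaviour. The sketch below is not taken from the paper; the sample sizes, number of baseline items, and the 0.66 "true" turnout rate are made-up values used purely to illustrate the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 respondents per arm, J = 4 innocuous baseline items.
# Control respondents report how many of the 4 items apply (counts 0..4);
# treatment respondents see the same list plus "I voted" (counts 0..5).
# A true turnout rate of 0.66 is assumed here only for illustration.
control_counts = rng.binomial(4, 0.5, size=1000)
treatment_counts = rng.binomial(4, 0.5, size=1000) + rng.binomial(1, 0.66, size=1000)

# Difference-in-means estimator of the proportion holding the sensitive trait
turnout_estimate = treatment_counts.mean() - control_counts.mean()

# Standard error for the difference of two independent sample means
se = np.sqrt(treatment_counts.var(ddof=1) / treatment_counts.size
             + control_counts.var(ddof=1) / control_counts.size)

print(f"Estimated turnout: {turnout_estimate:.3f} (SE {se:.3f})")
```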
