Abstract

Public decision‐makers increasingly rely on satisfaction surveys to inform budget and policy decisions. Yet our knowledge of whether, and under what conditions, this input from public service users provides valid performance information remains incomplete. Using a preregistered split‐ballot experiment among government grant recipients in Denmark, this article shows that the ordering of survey questions can bias satisfaction measures even for highly experienced, professional respondents. We find that asking about overall satisfaction before any specific service ratings lowers overall user satisfaction compared with the reverse order, while the correlations between specific ratings and overall satisfaction remain relatively stable. Moreover, the question order effect outweighs that of a large‐scale embezzlement scandal that unexpectedly hit the investigated government agency during data collection. Our results support rising concerns that subjective performance indicators are susceptible to bias. We discuss how practitioners should approach satisfaction surveys to account for the risk of question order bias.
