Abstract

The format of a survey question can affect responses. Branched survey scales are a question format that is increasingly used but little researched, and it is unclear whether branched scales work differently from unbranched scales. According to the decomposition principle (Armstrong, Denniston, and Gordon 1975), if breaking a decision task into its component parts increases the accuracy of the final decision, one might expect that breaking an attitudinal item into its component parts would increase the accuracy of the final report. In practice, this is done by first asking the respondent the direction of their attitude and then using a follow-up question to measure its intensity (Krosnick and Berent 1993). A split-ballot experiment was embedded within the Understanding Society Innovation Panel, allowing a comparison of responses between branched and unbranched versions of the same questions. The reliability and validity of both versions were assessed, along with the time taken to answer the questions in each format. Within a total survey costs framework, this makes it possible to establish whether any gains in reliability and validity are outweighed by the additional costs incurred through extended administration times. Findings show evidence of response differences between branched and unbranched scales, particularly a higher rate of extreme responding in the branched format. However, the differences in reliability and validity between the two formats are less clear cut. The branched questions took longer to administer, potentially increasing survey costs.
