Abstract

Background
Paired preference tests are one of several methods for measuring consumer acceptance, yet the test has had persistent issues regarding its statistical analysis. Initially, test designs were ‘forced choice’, without a ‘No Preference’ option; accordingly, the data were limited. Later, putatively identical stimuli were used as controls, and consumers reported preferences for these stimuli. Because it is not logically possible to have a genuine preference between identical stimuli, these responses were assumed to be responses elicited in the ‘no preference’ condition. This provided the control condition for statistical comparison, and it also allowed responses of ‘no preference’. Subsequent experimental designs were refinements of this approach.

Scope and approach
Difficulties with the earlier forced-choice designs are described, along with how statistical analysis changed once control groups were introduced to represent the ‘no preference’ condition. The review then describes how the measurements were refined by supplementing frequency measures with d′ measures from signal detection theory. The factors that cause consumers to report preferences for putatively identical stimuli are discussed, as is how these factors have spawned alternative protocols for paired preference tests.

Key findings and conclusions
Forced-choice preference tests were adopted so that simple binomial statistics could be used, but they did not allow consumers to express ‘no preference’. The introduction of control groups using putatively identical stimuli solved this problem. Supplementing frequency measures with d′ measures allowed the use of signal detection protocols, which elicited more accurate measures of preference strength. This is still work in progress; further developments are expected.
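The two statistical tools named in the abstract can be sketched as follows. This is a minimal illustration with hypothetical counts, not the review's own protocol: an exact two-sided binomial test on forced-choice preference counts, and the standard signal-detection conversion d′ = √2 · z(pc) often used for two-alternative forced-choice data.

```python
from math import comb, sqrt
from statistics import NormalDist

def binomial_p_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k under Binomial(n, p)."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

def dprime_2afc(prop_choosing: float) -> float:
    """d' for a two-alternative forced-choice task: sqrt(2) times the
    z-transform of the proportion choosing one alternative."""
    return sqrt(2) * NormalDist().inv_cdf(prop_choosing)

# Hypothetical data: 68 of 100 consumers prefer sample A over sample B.
p_val = binomial_p_two_sided(68, 100)  # small value: unlikely under no preference
d = dprime_2afc(0.68)                  # d' as a measure of preference strength
```

Frequency-based analysis stops at the p-value; the d′ measure additionally expresses how strong the preference is on a common signal-detection scale, which is what the supplementation described in the abstract provides.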
