Abstract

Sensory and consumer testing are primary strategies companies use to collect insights about consumer products. However, unreliable results from these tests can lead to failed product launches, which can carry enormous financial and brand-equity consequences. One potential cause is that sensory tasks are often deliberately devoid of additional stimulation, which may reduce a panelist’s engagement with the task. Engagement may drive involvement and focus during testing and help generate reliable data. The overall objective of this study was to develop (Study 1), refine (Study 2), and validate (Studies 3–5) an engagement questionnaire (EQ), a tool to empirically assess varying levels of engagement with a task. Prior literature and an exploratory qualitative assessment through an online platform (Study 1, N = 87) were used to operationalize the dimensionality of engagement and develop a preliminary questionnaire consisting of 54 items covering 7 themes (ability, attention, passivity, involvement, interest, motivation, and relevance). To assess the effectiveness of the initial item set, participants recalled being engaged with a task and answered the preliminary EQ (Study 2, N = 186). The results underwent exploratory factor analysis (EFA) to explore the dimensionality and refine the scale by strategically eliminating items. To validate the tool, the EQ was distributed following numerous consumer sensory tests that spanned a variety of food and beverage products, test lengths, and numbers and types of questions (Studies 3–5). EFA conducted in Study 3 (N = 774) resulted in a three-factor, 10-item model with good reliability (α = 0.856). The new model was subsequently validated in two additional consumer sensory studies (Study 4A, N = 416 and Study 4B, N = 446).
Both validation studies produced acceptable to good model fit indices, with RMSEA below 0.07, SRMR below 0.04, both CFI and TLI above 0.95, and good reliability (α > 0.850). When tested in a different sensory testing facility, the EQ produced acceptable model fit indices (Study 5, N = 247) and tracked as expected with participants’ perceived workload measured using the NASA Task Load Index, confirming the content, convergent, and divergent validity of the instrument. Such an instrument allows for a deeper understanding of panelist engagement and its impact on data quality, enabling the design of sensory tests capable of generating more reliable data both within the field of food science and beyond.
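The reliability coefficient reported throughout is Cronbach's α, which can be computed from a respondents-by-items score matrix. A minimal sketch in Python with NumPy, using a small hypothetical matrix of 5-point Likert responses (the data and function name are illustrative, not from the study):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 panelists x 3 questionnaire items
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 3, 2],
                   [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))  # → 0.918
```

Values above roughly 0.8, such as those reported for the EQ, are conventionally read as good internal consistency.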
