Abstract

The pairwise A‐Not A design involves two stimuli presented multiple times in a block of trials: a reference stimulus (A) and a comparison stimulus (B). The combined A‐Not A design employs A and several levels of B in a block of trials. Both designs were compared using iced tea with five levels of sucrose. Six judges were assessed for sensitivity using the same number of trials in each design, including their overall sensitivity, their average sensitivity across four replicated blocks, and the variability in sensitivity across those blocks. The pairwise design gave higher mean sensitivity, but also higher variability, than the combined design. A secondary analysis considered fewer trials in the combined design, so that sensitivity estimates were based on the same number of trials as in the pairwise design. The pattern of sensitivity within each design did not change, but the variability was now comparable. This suggests that the combined design yields lower variation for an equivalent expenditure of resources, or similar levels of variation with reduced resources. The trade-off is slightly lower estimates of sensitivity. Additionally, the combined design produced sensitivity estimates significantly above chance performance when the test stimuli were identical, although the magnitudes of these estimates were small.

Practical applications

The A‐Not A test is used in industrial research and development for a range of purposes. The results, expressed as the size of the difference between products, d′, and the variance of d′, guide business decisions on whether products are similar enough to be indistinguishable or are perceivably different. The variance of d′ is typically predicted from theoretical models, but these do not account for real-life variation. To study the difference between the real-life variance and the variance predicted from theoretical models, an empirical study was conducted in which many A‐Not A data were collected from a small group of subjects under controlled conditions. The results showed that the real-life variance is larger than predicted, so differences that appear small but statistically significant may no longer be significant once this real-life variance is taken into account. The comparison of pairwise versus combined designs generated insights into the effect of test design on the outcome. The combined design, which is more efficient when more than one prototype is to be compared, produces smaller d′ values and variance than the pairwise design for the same number of collected trials. In this sense, the study gives insight into how results from one test design can be translated to another.
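For readers unfamiliar with how d′ and its theoretical variance are obtained from A‐Not A response counts, the following minimal Python sketch illustrates the standard yes/no signal-detection calculation with an asymptotic (Gourevitch and Galanter style) variance formula. It is an illustration of the general technique only, not the authors' analysis, and the trial counts in the example are hypothetical.

    # Minimal sketch: d' and its theoretical variance for an A-Not A (yes/no) test.
    # Standard signal-detection formulas; not the authors' exact analysis.
    from scipy.stats import norm

    def d_prime_and_variance(hits, n_signal, false_alarms, n_noise):
        """Return (d', theoretical variance of d') from response counts.

        hits          -- "A" responses to the reference stimulus A
        n_signal      -- number of A trials
        false_alarms  -- "A" responses to the comparison (Not-A) stimulus
        n_noise       -- number of Not-A trials
        """
        # Proportions, with a small correction to avoid rates of 0 or 1.
        h = (hits + 0.5) / (n_signal + 1)
        f = (false_alarms + 0.5) / (n_noise + 1)

        z_h, z_f = norm.ppf(h), norm.ppf(f)
        d_prime = z_h - z_f

        # Asymptotic variance of d' predicted from the binomial sampling model.
        var = (h * (1 - h)) / (n_signal * norm.pdf(z_h) ** 2) \
            + (f * (1 - f)) / (n_noise * norm.pdf(z_f) ** 2)
        return d_prime, var

    # Hypothetical example: 18 "A" responses on 24 A trials, 7 on 24 Not-A trials.
    d, v = d_prime_and_variance(18, 24, 7, 24)
    print(f"d' = {d:.2f}, predicted variance = {v:.3f}")

The variance returned here is the "theoretical" prediction discussed above; the study's point is that variance estimated empirically across replicated blocks can exceed this prediction.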
