In an earlier report, Clement et al. [J. Acoust. Soc. Am. 107, 2887 (2000)] showed that listeners’ confidence ratings of the identity of audiovisual syllables produced with conflicting auditory and visual cues were typically lower than those for audiovisually congruent syllables, even when both types of syllables were consistently labeled as the same phoneme. Confidence ratings varied across listeners, suggesting that auditory–visual integration is graded even among listeners who exhibit visual bias. In the current experiment, listeners identified audiovisual tokens from two talkers whose productions elicit a high proportion of visually biased responses [Carney et al., J. Acoust. Soc. Am. 106, 2270 (1999)] and rated their confidence in each labeling judgment. Only listeners who exhibited strong visual bias were then tested in a discrimination task in which a true audiovisual /di/ served as the standard. Comparison stimuli were visual /gi/-auditory /bi/, visual /bi/-auditory /gi/, and true audiovisual /bi/, /di/, and /gi/. Listeners were able to discriminate the visually biased stimuli from the true /di/ stimuli while labeling both as alveolar tokens. Results support the notion that perception of bimodal stimuli is graded. [Research supported by NIDCD.]