Abstract
When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet these cues are seldom considered when designing hearing aids. Models of auditory–visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory–visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show a nonmonotonic relationship between auditory-alone speech recognition and auditory–visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory–visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory–visual speech recognition performance, voicing, is often the cue that benefits least from amplification.