Abstract
The easiest and most effective way to improve speech recognition for hearing‐impaired individuals, or for normal‐hearing individuals listening in noisy or reverberant environments, is to have them watch the talker’s face. Auditory‐visual (AV) speech recognition has been shown consistently to be better than either hearing alone or speechreading alone for all but the most profoundly hearing‐impaired individuals. When AV recognition is less than perfect, several factors need to be considered. The most obvious of these are poor auditory (A) and poor visual (V) speech recognition skills. However, even when differences in unimodal skill levels are taken into account, differences among individual AV recognition scores persist. At least part of these individual differences may be attributable to differing abilities to integrate A and V cues. Unfortunately, there is no widely accepted measure of AV integration ability. Recent models of AV integration offer a quantitative means for estimating individual integration abilities for phoneme recognition. In this study, we compare several possible integration measures, along with model predictions, using both congruent and discrepant AV phoneme and sentence recognition tasks. This talk will address the need for independent measures of AV integration for individual subjects. [Work supported by NIH Grant DC00792.]
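
For readers unfamiliar with quantitative integration models, the sketch below illustrates one way such a model can generate AV predictions from unimodal phoneme-recognition data. The abstract does not name the specific model used in the study; this example assumes an FLMP-style multiplicative combination rule and uses made-up confusion matrices purely for illustration.

    import numpy as np

    # Illustrative sketch only (assumed FLMP-style multiplicative integration),
    # not the study's actual model: combine unimodal phoneme-response
    # probabilities and renormalize to obtain a predicted AV confusion matrix.
    def predict_av_confusions(p_auditory, p_visual):
        """Predict an AV phoneme confusion matrix from unimodal matrices.

        Both inputs are (n_stimuli, n_responses) row-stochastic arrays giving
        auditory-only and visual-only response probabilities for the same
        phoneme set.
        """
        combined = p_auditory * p_visual                        # multiplicative integration
        return combined / combined.sum(axis=1, keepdims=True)   # renormalize each stimulus row

    # Hypothetical two-phoneme example (invented numbers): the integrated
    # prediction is sharper than either modality alone.
    p_a = np.array([[0.7, 0.3], [0.4, 0.6]])   # auditory-only confusions
    p_v = np.array([[0.8, 0.2], [0.3, 0.7]])   # visual-only confusions
    print(predict_av_confusions(p_a, p_v))

Comparing such model predictions with observed AV scores is one way an individual's integration ability could be quantified independently of their unimodal skill levels.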