Abstract

Hearing aids (HAs) are the primary method for treating hearing impairment. However, in complex environments with competing sound sources, HAs provide marginal benefits at best. Under these conditions, clinicians recommend facing the speaker to extract visual speech information. Combined auditory-visual (AV) speech generally provides a signal that is much more resistant to noise and reverberation than an auditory-only signal. The Articulation Index (AI) established that different frequency regions of speech vary in their degree of importance for intelligibility. However, the frequencies most important for auditory-only speech intelligibility differ from those most important for AV speech intelligibility. Thus, the optimal signal-processing solution may differ between AV and auditory-only conditions. Braida and colleagues sought to develop an AV version of the AI to enable HA signal-processing strategies to be compared without the time and expense of behavioral testing. This presentation desc...
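To make the band-importance idea concrete, here is a minimal sketch of a classic AI-style computation: predicted intelligibility is a weighted sum of per-band audibility, with weights reflecting each frequency band's contribution. The band count, weights, and audibility values below are illustrative assumptions for this sketch, not values from the work described in this abstract, and the function is a simplification of standard ANSI S3.5-style formulations.

```python
# Minimal sketch of an Articulation Index (AI) style computation.
# The weights and audibility values are illustrative placeholders,
# not the values used in the work described here.

def articulation_index(importances, audibilities):
    """Weighted sum of per-band audibility.

    importances:  band-importance weights, summing to 1.0
    audibilities: per-band audibility in [0, 1] (roughly, the fraction
                  of the speech dynamic range above the noise floor)
    """
    assert abs(sum(importances) - 1.0) < 1e-6, "weights must sum to 1"
    return sum(w * a for w, a in zip(importances, audibilities))

# Hypothetical five-band example: mid frequencies weighted most heavily
# for auditory-only intelligibility, high bands masked by noise.
weights = [0.10, 0.20, 0.30, 0.25, 0.15]
audibility = [0.9, 0.8, 0.5, 0.3, 0.1]
print(f"AI = {articulation_index(weights, audibility):.2f}")
```

An AV version of the AI, as pursued by Braida and colleagues, would presumably require a different set of importance weights, since visual speech supplies cues (e.g., place of articulation) that shift which acoustic bands matter most.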
