Abstract

Speech intelligibility depends on factors related to the auditory processes involved in sound perception as well as on the acoustic properties of the sound entering the ear. A clear understanding of speech perception in complex acoustic conditions remains a challenge. Here, a computational modeling framework is presented that attempts to predict the speech intelligibility obtained by normal-hearing and hearing-impaired listeners in various adverse conditions. The model combines the concept of envelope frequency selectivity in the auditory processing of the sound with a decision metric based either on the signal-to-noise envelope power ratio or on a correlation measure. The proposed model accounts for the effects of stationary background noise, reverberation, nonlinear distortions, and noise-reduction processing on speech intelligibility. However, due to its simplified auditory preprocessing stages, the model fails to account for the consequences of individual hearing loss on intelligibility. To address this, physiologically inspired extensions of the auditory preprocessing in the model are combined with the modulation-frequency selective processing and the back-end processing that have been successful in the conditions tested with normal-hearing listeners. The goal is to disentangle the consequences of different types of hearing deficits on speech intelligibility in a given acoustic scenario.
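The signal-to-noise envelope power ratio mentioned above is commonly computed per modulation band as the noise-corrected envelope power of the noisy speech relative to that of the noise alone, with the per-band values combined by root-sum-of-squares integration. A minimal sketch of this decision metric, assuming precomputed envelope power arrays and an illustrative lower bound to avoid negative or zero values (both function name and floor value are assumptions, not the authors' exact implementation):

```python
import numpy as np

def snr_env(p_env_mix, p_env_noise, floor=1e-3):
    """Combine per-band envelope-power SNRs into a single decision value.

    p_env_mix   : envelope power of the noisy speech per modulation band
    p_env_noise : envelope power of the noise alone per modulation band
    floor       : illustrative lower bound preventing negative/zero powers
    """
    p_env_mix = np.asarray(p_env_mix, dtype=float)
    p_env_noise = np.asarray(p_env_noise, dtype=float)

    # Noise-corrected speech envelope power, floored to stay positive
    p_speech = np.maximum(p_env_mix - p_env_noise, floor)
    p_noise = np.maximum(p_env_noise, floor)

    # Per-band envelope SNR, then root-sum-of-squares integration
    snr_per_band = p_speech / p_noise
    return float(np.sqrt(np.sum(snr_per_band ** 2)))
```

For example, if the noisy-speech envelope power is twice the noise envelope power in two modulation bands, each band contributes an envelope SNR of 1 and the combined value is sqrt(2).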
