Abstract

Deaf listeners with cochlear implants (CIs) achieve significant levels of speech recognition, but their performance range remains wide. Unfortunately, our understanding of the speech perception mechanisms employed by CI users is still incomplete. In particular, we do not know the exact combination of acoustic cues that CI users employ to understand speech, nor do we understand how sensory information is represented and combined, and how that information is used to perform speech identification. We have attempted to address this issue by developing mathematical models (Multidimensional Phoneme Identification, or MPI, models) that aim to predict phoneme identification for individual cochlear implant users based on their discrimination along specified acoustic dimensions. Mathematically, the MPI model is a multidimensional extension of the Durlach–Braida model of loudness perception. The MPI model can explain most of the vowel pairs and consonant pairs that are predicted to be most frequently confused by groups of CI users. In this presentation we will discuss individual data suggesting that speech perception by CI users may be limited by two kinds of factors: psychophysical (i.e., limited just-noticeable differences, or jnd's, along relevant acoustic dimensions) and cognitive (related to imperfect integration of different acoustic cues). [Work supported by NIDCD (R01-DC03937), NOHR and DRF.]

