Abstract

Cochlear implant (CI) users achieve substantial levels of speech recognition on average, but variability across listeners is very high. Unfortunately, our understanding of the speech perception mechanisms employed by CI users is still incomplete. To address this issue, we have developed the multidimensional phoneme identification (MPI) model, which aims to predict phoneme identification for individual cochlear implant users based on their discrimination along specified acoustic dimensions. The MPI model was used to fit vowel confusion matrices from English- and Spanish-speaking CI users. Good agreement between predicted and observed matrices was obtained for both English and Spanish. Some of the acoustic dimensions required to obtain these fits were the same for both languages (e.g., F1 and F2), but others were not (e.g., F3 was required to obtain a good fit in English, but not in Spanish). These results are consistent with differences in the acoustic phonetics of the two languages: a low value of F3 is used in English to encode the retroflex vowel /r/, a sound that does not exist in Spanish. These results raise the possibility that optimal stimulation strategies may differ across languages. [Work supported by NIDCD (R01-DC03937), NOHR, DRF, and BID/CONICYT (Uruguay).]
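
To make the idea concrete, the following is a minimal illustrative sketch, not the authors' implementation, of an MPI-style prediction: each vowel is a point in an acoustic space (here, hypothetical mean F1/F2/F3 values), a listener's percept is that point plus Gaussian noise scaled by a per-dimension just-noticeable difference (JND), and the response is the nearest vowel template. Averaging over many simulated trials yields a predicted confusion matrix that could be compared with an observed one. All vowel templates, JND values, and function names below are assumptions chosen for illustration.

import numpy as np

# Hypothetical vowel templates: rows = vowels, columns = (F1, F2, F3) in Hz.
vowels = ["i", "a", "u"]
templates = np.array([
    [300, 2300, 3000],   # /i/
    [700, 1200, 2600],   # /a/
    [350,  900, 2500],   # /u/
], dtype=float)

def predict_confusions(templates, jnds, n_trials=20000, rng=None):
    """Monte Carlo estimate of P(response | stimulus) for each vowel pair.

    jnds: per-dimension discrimination thresholds (same units as templates);
          a larger JND means noisier perception along that dimension.
    """
    rng = np.random.default_rng(rng)
    jnds = np.asarray(jnds, dtype=float)
    n = len(templates)
    counts = np.zeros((n, n))
    for stim in range(n):
        # Percepts: true formant values perturbed by noise scaled by the JNDs.
        noise = rng.normal(0.0, jnds, size=(n_trials, templates.shape[1]))
        percepts = templates[stim] + noise
        # Classify each percept as the nearest template, measuring distance
        # in JND units so all dimensions are comparable.
        scaled = (percepts[:, None, :] - templates[None, :, :]) / jnds
        dists = np.sqrt((scaled ** 2).sum(axis=2))
        responses = dists.argmin(axis=1)
        counts[stim] = np.bincount(responses, minlength=n)
    return counts / n_trials

# Example: a listener with poor F3 discrimination (large F3 JND) tends to
# confuse vowels that differ mainly in F3; parameter values are illustrative.
conf = predict_confusions(templates, jnds=[60, 150, 800], rng=0)
print(np.round(conf, 2))

In a sketch like this, fitting the model to an individual listener would amount to adjusting the JND parameters (and the set of dimensions included) until the predicted matrix best matches that listener's observed confusion matrix.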
