Abstract

Adaptation to the acoustic world after cochlear implantation requires significant adjustment to degraded auditory signals. The cognitive mechanisms underlying the mapping of such signals onto preexisting internal representations (in postlingually deafened individuals), or the formation of novel internal representations (in prelingually deafened individuals), are not well understood. A better understanding of these perceptual learning mechanisms is therefore critical to providing efficient training and (re)habilitation for new cochlear implant users. The advent of noise and sinewave vocoders that model the output of a cochlear implant speech processor has expanded the range of tools available for investigating perceptual learning of speech in normal-hearing listeners. A fundamental question is whether training should focus exclusively on speech perception (a synthetic approach), or whether training on extralinguistic or nonspeech auditory information (an analytic approach) promotes more robust perceptual learning and generalization to novel materials and signals. In this talk, I will present data from two recent studies of the perceptual learning of sinewave-vocoded speech by normal-hearing participants, each focusing on different levels of linguistic/extralinguistic and nonspeech acoustic information. I will discuss the implications these data have for understanding perceptual learning and the cognitive mechanisms that mediate speech perception.
