Abstract

To learn language, children must map variable input to categories such as phones and words. How do children process variation and distinguish between variable pronunciations ("shoup" for soup) and new words? The unique sensory experience of children with cochlear implants, who learn speech through their device's degraded signal, lends new insight into this question. In a mispronunciation sensitivity eyetracking task, children with implants (N=33) and children with typical hearing (N=24; 36-66 months; 36F, 19M; all non-Hispanic white) who had larger vocabularies processed known words faster. But children with implants were less sensitive to mispronunciations than typical hearing controls. Thus, children of all hearing experiences use lexical knowledge to process familiar words but require detailed speech representations to process variable speech in real time.
