Abstract
Over the course of a lifetime, listeners are likely to encounter many unfamiliar talkers speaking the listeners’ native language. Despite the fact that talkers may differ radically in the way they sound, listeners normally learn to understand the novel speech patterns of unfamiliar talkers without difficulty. Part of learning to understand an unfamiliar talker speaking a familiar language is learning to relate novel acoustic patterns in the unfamiliar speech to existing mental representations of familiar phonetic categories. This process has frequently been described in terms of modifying an existing psychological “space” of mental representations by shifting the focus or weight of attention to linguistically useful aspects of the speech signal. In the experiment reported here, native English listeners were trained to better understand an unfamiliar talker—a computer speech synthesizer. Training to recognize and transcribe English words resulted in a significant increase in listeners’ abilities to understand words and to identify consonants. Training also strongly influenced the distribution of attention to particular features of the speech signal, as indicated by changes in listeners’ judgments of the similarity of consonants after training. Implications for theories of perceptual learning and phonetic categorization will be discussed.