Abstract

A common phenomenon reported by experienced, late learners of a second language (L2) is that comprehending L2 speech, especially under non-optimal conditions (e.g., in noisy rooms, when driving, when the speaker talks rapidly), is more effortful than processing L1 input. A variety of paradigms have documented this phenomenon experimentally. Many current theories of L1 and L2 speech perception invoke concepts of learned (language-specific) patterns of selective attention or attunement to characterize the processes by which native speakers rapidly and efficiently extract sufficient phonetic information from complex acoustic signals to recover phonological sequences (words or word-forms). It is suggested here that adult listeners have two modes of language-specific speech processing available to them, a phonetic mode (requiring attentional resources) and a phonological mode (automatic), which are tapped in the laboratory to different degrees as a function of complex interactions among subjects' linguistic experience, stimulus characteristics, and task demands. Exemplary experiments on L1 and L2 listeners' perception, using perceptual assimilation (cross-language identification) and speeded discrimination tasks, as well as electrophysiological indices of discrimination, illustrate some of these interactions within the framework of the automatic selective perception model of speech perception. [Work supported by NIH-NIDCD and NSF.]

