Abstract

Speech communication in a non-native language (L2) can feel effortful, and the present study suggests that this effort affects both auditory and lexical processing. Electroencephalography (EEG) recordings were made from native English (L1) and native Korean listeners while they heard English sentences spoken with two accents (English and Korean) in the presence of a distracting talker. Neural entrainment (i.e., phase locking between the EEG recording and the speech amplitude envelope) was measured for the target and distractor talkers. L2 listeners showed relatively greater entrainment to target talkers than did L1 listeners, likely because their difficulty with L2 speech recognition led them to focus more attention on the speech signal. The N400 was measured for the final word of each sentence; L2 listeners showed greater lexical processing in high-predictability sentences than did L1 listeners. L1 listeners showed greater target-talker entrainment when listening to the more difficult L2 accent than to their own L1 accent, and similarly had larger N400 responses for the L2 accent. It thus appears that the increased effort of L2 listeners, and of L1 listeners understanding L2-accented speech, modulates auditory and lexical processing during speech recognition. This may provide a mechanism that compensates for perceptual challenges under adverse conditions.
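The entrainment measure defined in the abstract (phase locking between the EEG recording and the speech amplitude envelope) can be sketched as follows. This is an illustrative computation only, not the study's actual analysis pipeline: the function names, the 1–8 Hz analysis band, and the filter settings are assumptions chosen to match typical speech-entrainment analyses.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def amplitude_envelope(speech, fs, cutoff=10.0):
    """Broadband amplitude envelope of a speech waveform, low-pass
    filtered to keep the slow (< ~10 Hz) modulations that entrainment
    analyses typically track. Cutoff is an assumption for illustration."""
    env = np.abs(hilbert(speech))
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)


def phase_locking_value(eeg, envelope, fs, band=(1.0, 8.0)):
    """Phase-locking value (PLV) between one EEG channel and the speech
    envelope, band-pass filtered to a delta/theta range (assumed 1-8 Hz).
    Returns a value in [0, 1]: 1 = perfectly consistent phase lag."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    phase_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    # Mean resultant length of the phase difference across samples.
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env))))
```

With this measure, an EEG channel oscillating in step with the envelope's slow modulations yields a PLV near 1, while unrelated noise yields a value near 0, which is the contrast behind "greater entrainment for target talkers".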

Highlights

  • Understanding speech in a non-native language (L2) can be effortful because one’s perceptual and linguistic representations are typically not fully tuned to the L2 (e.g., Flege, 1992; Iverson et al., 2003).

  • Cognitive load could be expected to interfere with L2 speech recognition; an unrelated visual search task can reduce L1 listeners’ reliance on acoustic detail in speech (Mattys, Brooks, & Cooke, 2009; Mattys & Palmer, 2015) as well as reduce auditory cortical responses to nonspeech tones (Molloy, Griffiths, Chait, & Lavie, 2015).

  • Listening effort can be thought of as facilitating speech perception, in that it allows L1 listeners to modulate their processing to fit the demands of the listening situation, both by enhancing their representation of the acoustic signal through greater focused attention (e.g., Ding & Simon, 2012) and by searching more thoroughly among lexical competitors when the signal is thought to be less reliable (e.g., McQueen & Huettig, 2012).


Introduction

Understanding speech in a non-native language (L2) can be effortful because one’s perceptual and linguistic representations are typically not fully tuned to the L2 (e.g., Flege, 1992; Iverson et al., 2003). It is not clear what effects this additional listening effort and cognitive load have on the processes underlying L2 speech recognition. Cognitive load could be expected to interfere with L2 speech recognition; an unrelated visual search task can reduce L1 listeners’ reliance on acoustic detail in speech (Mattys, Brooks, & Cooke, 2009; Mattys & Palmer, 2015) as well as reduce auditory cortical responses to nonspeech tones (Molloy, Griffiths, Chait, & Lavie, 2015). Some of the additional effort and load experienced by L2 listeners may be a product of compensatory mechanisms that help overcome L2 perceptual and comprehension difficulties.
