Abstract

The present study investigated how single-talker and babble maskers affect auditory and lexical processing during native (L1) and non-native (L2) speech recognition. Electroencephalogram (EEG) recordings were made while L1 and L2 (Korean) English speakers listened to sentences in the presence of single-talker and babble maskers that were colocated or spatially separated from the target. The predictability of the sentences was manipulated to measure lexical-semantic processing (N400), and selective auditory processing of the target was assessed using neural tracking measures. The results demonstrate that intelligible single-talker maskers cause listeners to attend more to the semantic content of the targets (i.e., greater context-related N400 changes) than when targets are in babble, and that listeners track the acoustics of the target less accurately with single-talker maskers. L1 and L2 listeners both modulated their processing in this way, although L2 listeners had more difficulty with the materials overall (i.e., lower behavioral accuracy, less context-related N400 variation, more listening effort). The results demonstrate that auditory and lexical processing can be simultaneously assessed within a naturalistic speech listening task, and listeners can adjust lexical processing to more strongly track the meaning of a sentence in order to help ignore competing lexical content.

Highlights

  • Speech perception in everyday noisy situations is complex because these contexts put simultaneous demands on multiple levels of processing

  • Intelligible single-talker maskers led listeners to attend more to the semantic content of targets (greater context-related N400 changes) than babble maskers did, while acoustic tracking of the target was less accurate with single-talker maskers

  • Auditory and lexical processing can be assessed simultaneously within a naturalistic speech listening task, and listeners can adjust lexical processing to track sentence meaning more strongly and thereby ignore competing lexical content


Introduction

Speech perception in everyday noisy situations (e.g., parties or restaurants) is complex because these contexts put simultaneous demands on multiple levels of processing. Noise masks the acoustic information of a speaker at the auditory periphery; the listener must perceptually track the variable acoustics of the speaker's voice through a background of similar voices arriving from multiple spatial locations; and the listener must follow the meaning of the conversation while ignoring what other people are saying (e.g., Brungart, 2001; Shinn-Cunningham, 2008; Cooke et al., 2008). This situation becomes more difficult when understanding speech in a non-native (L2) language: noise may have a greater effect on L2 listeners because their perceptual and linguistic processes are not as well developed in their L2 (see Lecumberri et al., 2010, for a review), and the perceptual and cognitive demands of L2 speech communication may reduce the spare capacity available for focusing attention in difficult listening conditions (e.g., Kahneman, 1973; Pichora-Fuller et al., 1995; McCoy et al., 2005).

More broadly, the magnitude of the N400, an event-related potential sensitive to lexical-semantic processing, can be used as a measure of lexical-semantic effort during word recognition.
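The neural tracking measures mentioned above are described here only at a high level. For readers unfamiliar with the idea, the sketch below is a minimal, hypothetical illustration of one simple variant: correlating the low-frequency speech envelope with a band-limited EEG channel across a range of lags. The sampling rate, cutoff frequency, and the `tracking_index` helper are assumptions for illustration, not the authors' pipeline; published analyses more commonly use temporal response functions or related regression models.

```python
# Illustrative sketch (not the authors' method) of a simple
# stimulus-envelope tracking index: the peak lagged correlation
# between the speech envelope and a low-pass-filtered EEG channel.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

FS = 128  # Hz; hypothetical rate after downsampling the EEG


def lowpass(x, cutoff=8.0, fs=FS, order=4):
    # Zero-phase low-pass filter into the delta/theta range,
    # where cortical envelope tracking is typically measured.
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)


def envelope(speech):
    # Broadband amplitude envelope via the analytic signal.
    return lowpass(np.abs(hilbert(speech)))


def tracking_index(eeg, speech, max_lag_s=0.3, fs=FS):
    env = envelope(speech)
    eeg_f = lowpass(eeg)
    env = (env - env.mean()) / env.std()
    eeg_f = (eeg_f - eeg_f.mean()) / eeg_f.std()
    max_lag = int(max_lag_s * fs)
    # Correlate at EEG lags 0..max_lag (cortex lags the stimulus);
    # the peak correlation serves as the "tracking" strength.
    r = [np.corrcoef(env[: -lag or None], eeg_f[lag:])[0, 1]
         for lag in range(max_lag + 1)]
    return max(r)


# Toy usage with synthetic signals: the fake EEG contains a delayed
# copy of the speech envelope buried in noise.
rng = np.random.default_rng(0)
speech = rng.standard_normal(FS * 10)
eeg = np.roll(envelope(speech), int(0.12 * FS)) + rng.standard_normal(FS * 10)
print(f"tracking index: {tracking_index(eeg, speech):.2f}")
```

Under this toy setup, a target that is tracked well yields a high index, and the abstract's finding of less accurate tracking under single-talker maskers would correspond to a lower value of such a measure for the target speech.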
