Abstract

Oral communication often takes place in noisy environments, which challenge spoken-word recognition. Previous research has suggested that the presence of background noise increases the number of candidate words competing with the target word for recognition, and that this increase affects the time course and accuracy of spoken-word recognition. In this study, we used computational modeling to further investigate the temporal dynamics of competition processes in the presence of background noise, and how these vary with listeners' language proficiency (i.e., native and non-native). We developed ListenIN (Listen-In-Noise), a neural-network model based on an autoencoder architecture, which learns to map phonological forms onto meanings in two languages and simulates native and non-native spoken-word comprehension. We also examined the model's activation states during online spoken-word recognition. These analyses demonstrated that the presence of background noise increases the number of competitor words engaged in phonological competition, and that this happens in similar ways both intra- and interlinguistically and in native and non-native listening. Taken together, our results support accounts positing a "many-additional-competitors scenario" for the effects of noise on spoken-word recognition.
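
To make the described architecture concrete, the sketch below is a minimal, hypothetical illustration (not the authors' released code) of an autoencoder-style network that maps a phonological input vector onto a semantic output vector, with Gaussian input noise standing in for background noise. The class name ListenInSketch, the layer sizes, the activation functions, and the noise-injection scheme are all assumptions; the abstract does not specify these details.

import torch
import torch.nn as nn

class ListenInSketch(nn.Module):
    """Illustrative form-to-meaning mapper; all dimensions are arbitrary."""

    def __init__(self, n_phon=200, n_hidden=100, n_sem=300):
        super().__init__()
        # Encoder compresses the phonological form; decoder maps it to meaning.
        self.encoder = nn.Sequential(nn.Linear(n_phon, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_sem), nn.Sigmoid())

    def forward(self, phon, noise_sd=0.0):
        # Corrupting the input form stands in for listening in background noise.
        if noise_sd > 0:
            phon = phon + noise_sd * torch.randn_like(phon)
        return self.decoder(self.encoder(phon))

# Toy usage: compare meaning-layer activations for clean vs. noisy input.
model = ListenInSketch()
phon = torch.rand(1, 200)
clean, noisy = model(phon), model(phon, noise_sd=0.5)
print((clean - noisy).abs().mean())  # noise perturbs the activation state

Under these assumptions, the competition analyses the abstract describes would amount to inspecting how hidden- and meaning-layer activations spread across word candidates as input noise increases.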
