Abstract
Perceptual adaptation to a talker allows listeners to efficiently resolve ambiguities in the speech signal that arise because there is no one-to-one mapping between acoustic signals and intended phonemic categories across talkers. In ideal listening environments, preceding speech context enhances perceptual adaptation to a talker. However, little is known about how perceptual adaptation to speech occurs in more realistic listening environments with background noise. Here, we explored how talker variability and preceding speech context affect identification of phonetically confusable words in adverse listening conditions. Using response time and threshold signal-to-noise ratio (SNR) as dependent variables, we found that listeners were slower and less accurate at identifying mixed-talker speech than single-talker speech when target words were presented in multi-talker babble, and that preceding speech context enhanced word identification in noise in both single- and mixed-talker conditions. These results extend previous findings of perceptual adaptation to speech in quiet environments and suggest that two distinct mechanisms underlie such adaptation: rapid feedforward allocation of attention to salient talker-specific stimuli via auditory streaming, and an additional mechanism that preallocates cognitive resources to support processing of talker variability over longer time scales.