Abstract

Speech recognition in adverse listening conditions is more difficult for non-native listeners than for native listeners. Previous work in our laboratories found that Chinese-native listeners with native English exposure may improve their use of the temporal cues of noise for English vowel identification. The purpose of this study was to investigate whether such benefits of using temporal modulation in noise are also present in sentence recognition. IEEE sentence recognition in quiet, in stationary noise, and in temporally modulated noise was measured for American English native (EN) listeners, Chinese-native listeners in the United States (CNU), and Chinese-native listeners in China (CNC). Results showed that, in general, EN listeners outperformed the two groups of Chinese-native listeners in both quiet and noise, while CNU listeners had better sentence recognition scores than CNC listeners. Moreover, at low signal-to-noise ratios (SNRs), the masking release in sentence recognition from the temporal modulation of noise was greatest for EN listeners and smallest for CNC listeners, with CNU listeners in between, whereas at middle and high SNRs there was no significant group effect on masking release, consistent with the findings for English vowel perception. The group difference in using the temporal modulation of noise may be associated with acoustic differences between Chinese and English speech.
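
As a point of terminology (a hedged sketch of the conventional definition, not stated explicitly in the abstract): masking release is typically quantified as the score difference between the modulated-noise and stationary-noise conditions at a matched SNR, so the condition labels below are illustrative assumptions rather than the authors' notation.

```latex
% Conventional percent-correct definition of masking release (MR),
% assuming scores are compared at the same signal-to-noise ratio (SNR).
% P_mod  : sentence recognition score in temporally modulated noise
% P_stat : sentence recognition score in stationary noise
\begin{equation*}
  \mathrm{MR}(\mathrm{SNR}) = P_{\mathrm{mod}}(\mathrm{SNR}) - P_{\mathrm{stat}}(\mathrm{SNR})
\end{equation*}
```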
