Abstract

Noise makes speech perception substantially more challenging for non-native listeners than for native listeners. Training for non-native speech perception is usually conducted in quiet, and it remains unclear whether background noise benefits or hampers non-native speech perception learning. In this study, 51 Chinese-native listeners were randomly assigned to one of three groups: vowel training in quiet (TIQ), vowel training in noise (TIN), or an active control group that watched videos in English. Vowel identification was assessed before training (T1), immediately after training (T2), and three months after training (T3), in quiet and in various noise conditions. Compared with the video-watching group, the TIN group improved vowel identification significantly more in both quiet and noise at T2 and at T3. In contrast, the TIQ group improved significantly more in quiet and in non-speech noise conditions at T2, but this improvement did not hold at T3. Moreover, compared with the TIQ group, the TIN group showed significantly less informational masking at both T2 and T3 and less energetic masking at T3. These results suggest that L2 speech training in background noise may improve non-native vowel perception more effectively than training in quiet alone. Implications for non-native speech perception learning are discussed.
