In the present study, we investigated whether long-term music training improves audio-visual speech integration in Chinese, using event-related brain potential (ERP) measurements. Specifically, we recruited musicians and non-musicians to participate in an experiment in which visual Chinese characters were presented simultaneously with congruent or incongruent speech sounds. To ensure that participants attended to both the auditory and visual modalities, they were instructed to perform a probe detection task. For the musicians, audiovisual incongruent stimuli elicited larger N1 and N400 amplitudes than audiovisual congruent stimuli. For the non-musicians, by contrast, only a larger N400 amplitude was observed for incongruent relative to congruent stimuli, with no significant difference in N1 amplitude. Furthermore, correlation analyses indicated that more years of music training were associated with a larger N1 effect among the musicians. These results suggest that musicians detected character-speech sound incongruence in an earlier time window than non-musicians. Overall, our findings provide compelling evidence that music training is associated with better integration of visual characters and auditory speech sounds in language processing.