Abstract

Language is a remarkable cognitive ability that can be expressed through visual (written language) or auditory (spoken language) modalities. When visual characters and auditory speech convey conflicting information, individuals may selectively attend to either one. However, which modality dominates in such competing situations, and the neural mechanism underlying this dominance, remain unclear. Here, we presented participants with Chinese sentences in which the visual characters and the auditory speech conveyed conflicting information, while behavioral and electroencephalographic (EEG) responses were recorded. Results showed a prominent auditory dominance when audio-visual competition occurred. Specifically, the auditory mismatch condition elicited higher accuracy (ACC), larger N400 amplitudes, and more linkages in posterior occipital-parietal areas than the visual mismatch condition. Our research illustrates the superiority of auditory speech over visual characters, extending our understanding of the neural mechanisms of audio-visual competition in Chinese.
