Abstract

Language is a remarkable cognitive ability that can be expressed through the visual (written language) or auditory (spoken language) modality. When visual characters and auditory speech convey conflicting information, individuals may selectively attend to either one of them. However, which modality dominates in such a competing situation, and the neural mechanism underlying this dominance, remain unclear. Here, we presented participants with Chinese sentences in which the visual characters and auditory speech conveyed conflicting information, while behavioral and electroencephalographic (EEG) responses were recorded. Results showed a prominent auditory dominance when audio-visual competition occurred. Specifically, higher accuracy (ACC), larger N400 amplitudes, and more linkages in the posterior occipital-parietal areas were demonstrated in the auditory mismatch condition than in the visual mismatch condition. Our research illustrates the superiority of auditory speech over visual characters, extending our understanding of the neural mechanisms of audio-visual competition in Chinese.
