Abstract

Audio-visual neural interaction was examined using event-related potentials (ERPs). Eleven male volunteers participated in this study. EEG was recorded from 19 scalp locations. Japanese vowels (/a/ or /i/) or white noise (/noise/) served as auditory stimuli, and images of a face pronouncing a vowel ([a] or [i]) served as visual stimuli. The study comprised three conditions: (1) an audio-visual condition (AV condition) with bimodal stimulus presentation, (2) an auditory condition (A condition) with auditory stimulus presentation only, and (3) a visual condition (V condition) with visual stimulus presentation only. In the AV condition, the audio-visual stimulus pairs were phonetically congruent (audio /a/, visual [a]), incongruent (audio /a/, visual [i]), or deviant (/noise/, visual [a]). Participants were instructed to press a button in response to the vowel /a/ or the /noise/ stimulus. Audio-visual interaction was examined by subtracting the ERPs of the A or V condition from the ERPs of the AV condition. No cross-modal facilitatory effects were observed in auditory perception. However, topographical changes occurred in the face-specific negative component around 170 ms depending on the congruency of the auditory and visual information: the center of negative activity shifted towards the left hemisphere for incongruent stimuli. This result may be related to suppression of incongruent visual information.
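
As a worked illustration of the subtraction described above, a minimal sketch follows. It is not from the paper; the array shapes, variable names, and the NumPy-based workflow are assumptions for illustration only. Each condition's ERP is taken to be a channels-by-time array of trial-averaged voltages, and the difference waves AV - A and AV - V estimate the cross-modal contribution of the modality removed in the unimodal condition.

import numpy as np

def difference_wave(erp_av, erp_unimodal):
    """Subtract a unimodal ERP (A or V condition) from the bimodal (AV) ERP.

    Both inputs are (n_channels, n_times) arrays of trial-averaged
    voltages, e.g. 19 electrodes by the number of samples per epoch.
    """
    return erp_av - erp_unimodal

# Hypothetical example: 19 channels, 300 time samples per epoch average.
rng = np.random.default_rng(0)
erp_av = rng.standard_normal((19, 300))
erp_a = rng.standard_normal((19, 300))
erp_v = rng.standard_normal((19, 300))

av_minus_a = difference_wave(erp_av, erp_a)  # residual visual-related activity
av_minus_v = difference_wave(erp_av, erp_v)  # residual auditory-related activity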
