Abstract

Speech comprehension is significantly improved by visual information about the speaker's mouth movements. The audiovisual integration underlying this phenomenon is often studied in EEG experiments in which the event-related brain potential (ERP) elicited by a bimodal stimulus is compared to the sum of the ERPs triggered by auditory and visual signals of the same source. However, this method leads to spurious results in time ranges where ERP components common to all these stimulus types are present. One method that aims to filter out such common early anticipatory potentials is high-pass filtering of the data. In the present study, first, we demonstrated that subtle changes in the filter cut-off frequency lead to remarkably different results for the interaction effect, so that no reliable conclusion on the spatial distribution of the interaction could be drawn. Second, we proposed a different approach to investigating the ERP correlates of audiovisual integration: bimodal syllables with a slight temporal asynchrony were presented to subjects, and the ERPs corresponding to fused and unfused percepts were compared. We found that components corresponding to both the auditory N1 and P2 waves were smaller for the fused percept, supporting the view that N1 and P2 generator activities are suppressed during multimodal speech perception. The N1 effect showed a clear right-hemisphere dominance, while the effect around the P2 peak was most pronounced over centroparietal electrodes and dominated over the left hemisphere.
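The sensitivity of the additive-model comparison to the high-pass cut-off can be illustrated with a minimal, self-contained sketch. The following Python snippet is not the authors' analysis code; it uses synthetic ERP-like waveforms with an anticipatory slow potential that is common to the auditory (A), visual (V), and audiovisual (AV) conditions, and shows how the estimated interaction AV − (A + V) changes when only the high-pass cut-off is varied. All waveform shapes, amplitudes, and cut-off values are arbitrary illustrative assumptions.

```python
# Illustrative sketch (not the original analysis): the additive-model interaction
# AV - (A + V) depends on the high-pass cut-off when a slow anticipatory potential
# common to all three conditions is present, because that potential enters the
# difference once (with negative sign) and is attenuated differently by each filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(-0.5, 0.8, 1 / fs)           # epoch from -500 ms to 800 ms

def erp(peaks):
    """Sum of Gaussian components given as (latency s, amplitude uV, width s)."""
    return sum(a * np.exp(-((t - m) ** 2) / (2 * s ** 2)) for m, a, s in peaks)

# Slow anticipatory potential shared by A, V, and AV trials (assumed shape).
anticipatory = erp([(0.0, -3.0, 0.4)])

aud = anticipatory + erp([(0.10, -5.0, 0.03), (0.20, 4.0, 0.05)])   # auditory N1 + P2
vis = anticipatory + erp([(0.15, -2.0, 0.05)])                      # visual component
av  = anticipatory + erp([(0.10, -4.0, 0.03), (0.20, 3.0, 0.05),
                          (0.15, -2.0, 0.05)])                      # bimodal response

def highpass(x, cutoff_hz):
    """Zero-phase Butterworth high-pass filter."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="high")
    return filtfilt(b, a, x)

for cutoff in (0.1, 1.0, 2.0):             # subtly different cut-off frequencies
    interaction = highpass(av, cutoff) - (highpass(aud, cutoff) + highpass(vis, cutoff))
    n1_idx = np.argmin(np.abs(t - 0.10))   # sample nearest the N1 latency
    print(f"cut-off {cutoff:4.1f} Hz: interaction at ~100 ms = {interaction[n1_idx]:+.2f} uV")
```

Running the sketch prints a different interaction amplitude around the N1 latency for each cut-off, even though the underlying condition-specific responses are identical, which is the kind of filter-dependent artifact the abstract describes.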
