Abstract

Ten healthy volunteers took part in this event-related potential (ERP) study aimed at examining the electrophysiological correlates of cross-modal audio–visual interactions in an identification task. Participants were presented with either the simultaneous presentation of previously learned faces and voices (audio–visual condition, AV) or the separate presentation of faces (visual, V) or voices (auditory, A). As expected, an interference effect of audition on vision was observed at the behavioral level, as the bimodal condition was performed more slowly than the visual condition. At the electrophysiological level, the subtraction (AV − (A + V)) revealed three distinct cerebral activities: (1) a central positive/posterior negative wave around 110 ms, (2) a central negative/posterior positive wave around 170 ms, and (3) a central positive wave around 270 ms. These data suggest that cross-modal cerebral interactions could be independent of behavioral facilitation or interference effects. Moreover, the involvement of unimodal and multisensory convergence regions in these results, as suggested by a source localization analysis, is discussed.
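
The analysis rests on the additive-model subtraction AV − (A + V): if the bimodal response were simply the sum of the two unimodal responses, the difference wave would be flat, so any residual is taken as evidence of cross-modal interaction. The sketch below is not the authors' analysis code; it only illustrates this subtraction on placeholder ERP arrays. The variable names, array shapes, sampling rate, and random placeholder data are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): additive-model difference wave
# AV - (A + V) computed from channel-by-time averaged ERPs.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times = 64, 500            # hypothetical montage and epoch length

# Placeholder arrays standing in for the averaged ERPs of each condition.
erp_av = rng.normal(size=(n_channels, n_times))  # audio-visual condition
erp_a = rng.normal(size=(n_channels, n_times))   # auditory-only condition
erp_v = rng.normal(size=(n_channels, n_times))   # visual-only condition

# Cross-modal interaction estimate: whatever remains after removing the sum
# of the unimodal responses from the bimodal response.
interaction = erp_av - (erp_a + erp_v)

# Inspect the latencies reported in the abstract (~110, ~170, ~270 ms),
# assuming a 1000 Hz sampling rate and a time axis starting at stimulus onset.
sfreq = 1000.0
for latency_ms in (110, 170, 270):
    sample = int(latency_ms * sfreq / 1000.0)
    print(f"{latency_ms} ms -> mean amplitude {interaction[:, sample].mean():.3f}")
```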
