Abstract

Recent evidence from neurophysiological and functional imaging research has demonstrated that semantically congruent sounds can modulate the identification of a degraded visual object. However, it remains unclear how different integration regions interact with one another when only the visual object is degraded. The present study aimed to elucidate the neural bases of cross-modal functional interactions in degraded visual object recognition. Naturally degraded images and semantically congruent sounds were used in the experiment. Participants were presented with stimuli in three modality conditions: auditory only (A), degraded visual only (Vd), and simultaneous auditory and degraded visual (AVd). We used conjunction analysis and the classical 'max criterion' to define three audiovisual integration cortical hubs: the visual association cortex, the superior temporal sulcus, and Heschl's gyrus. Dynamic causal modeling (DCM) was then used to infer effective connectivity among these regions. The DCM results revealed that auditory stimulation increased connectivity from Heschl's gyrus to the visual association cortex and from the superior temporal sulcus to the visual association cortex. Thus, the visual association cortex is modulated not only via top-down feedback connections from higher-order convergence areas but also via lateral feedforward connections from the auditory cortex. The present findings support interconnected models of cross-modal information integration.
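
For readers who want the mechanics behind the effective-connectivity analysis, the sketch below illustrates the bilinear neural state equation that underlies DCM, dz/dt = (A + u·B)z + Cu (Friston et al., 2003), applied to the three hub regions named above. All connection strengths, the integration settings, and the region labels HG, STS, and VAC are illustrative assumptions for exposition, not the parameters estimated in this study.

```python
# Minimal sketch of the bilinear DCM neural state equation:
#     dz/dt = (A + u_aud * B) z + C u
# for three hypothetical regions: Heschl's gyrus (HG), superior temporal
# sulcus (STS), and visual association cortex (VAC). All numbers are
# illustrative assumptions, not estimates from the study.
import numpy as np

regions = ["HG", "STS", "VAC"]

# A: intrinsic (endogenous) connectivity; negative diagonal models self-decay.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.4, -1.0,  0.0],
              [ 0.2,  0.3, -1.0]])

# B: modulation of connections by the auditory input. The abstract's finding
# corresponds to positive entries for HG -> VAC and STS -> VAC.
B = np.zeros((3, 3))
B[2, 0] = 0.5   # auditory input strengthens HG -> VAC
B[2, 1] = 0.5   # auditory input strengthens STS -> VAC

# C: direct driving inputs; assume the sound drives HG and the degraded
# image drives VAC. Columns: [u_aud, u_vis].
C = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

def dzdt(z, u):
    """Bilinear neural dynamics: dz/dt = (A + u_aud * B) z + C u."""
    return (A + u[0] * B) @ z + C @ u

# Euler integration of a short AVd trial (both inputs switched on).
dt, steps = 0.01, 500
z = np.zeros(3)
u = np.array([1.0, 1.0])  # simultaneous auditory + degraded visual (AVd)
for _ in range(steps):
    z = z + dt * dzdt(z, u)
print(dict(zip(regions, np.round(z, 3))))
```

In this toy model, the nonzero B entries encode the study's qualitative finding, namely that an auditory input strengthens the HG -> VAC and STS -> VAC connections; fitting such A, B, and C matrices to measured BOLD responses is what the DCM analysis in the study does.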
