Abstract

The superior temporal sulcus (STS) and gyrus (STG) are commonly identified as functionally relevant for multisensory integration of audiovisual (AV) stimuli. However, most neuroimaging studies on AV integration have used stimuli of short duration in explicit evaluative tasks. Importantly, though, many of our AV experiences are of long duration and ambiguous. It is unclear whether the enhanced activity in auditory, visual, and AV brain areas would also be synchronised over time across subjects exposed to such multisensory stimuli. We used intersubject correlation to investigate which brain areas are synchronised across novices for uni- and multisensory versions of a 6-min 26-s recording of an unfamiliar, unedited Indian dance performance (Bharatanatyam). In Bharatanatyam, music and dance are choreographed together in a highly intermodally dependent manner. Activity in the middle and posterior STG was significantly correlated between subjects and also showed significant enhancement for AV integration when the functional magnetic resonance signals were contrasted against each other using a general linear model conjunction analysis. These results extend previous studies by showing an intermediate step of synchronisation for novices: while there was a consensus across subjects' brain activity in areas relevant for unisensory processing and AV integration of related audio and visual stimuli, we found no evidence for synchronisation of higher-level cognitive processes, suggesting these were idiosyncratic.
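
For readers unfamiliar with the method, a voxelwise intersubject correlation (ISC) analysis can be sketched as follows: each subject's time course is correlated, voxel by voxel, with the average time course of the remaining subjects. The snippet below is a minimal leave-one-out ISC in Python/NumPy; the array layout, subject count, and TR are hypothetical assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out intersubject correlation (ISC).

    data : ndarray, shape (n_subjects, n_timepoints, n_voxels)
        Preprocessed BOLD time courses (hypothetical layout).

    Returns an (n_subjects, n_voxels) array: for each subject and voxel,
    the Pearson correlation between that subject's time course and the
    mean time course of all remaining subjects.
    """
    n_subjects, n_timepoints, n_voxels = data.shape
    # z-score each voxel over time so a dot product over time equals Pearson's r
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    isc = np.empty((n_subjects, n_voxels))
    for s in range(n_subjects):
        rest = np.delete(z, s, axis=0).mean(axis=0)           # average of the other subjects
        rest = (rest - rest.mean(axis=0)) / rest.std(axis=0)  # re-standardise over time
        isc[s] = (z[s] * rest).sum(axis=0) / n_timepoints
    return isc

# Hypothetical usage: 12 subjects, 193 volumes (6 min 26 s at an assumed TR of 2 s),
# 1000 voxels of synthetic data standing in for preprocessed BOLD signals.
bold = np.random.randn(12, 193, 1000)
isc_map = intersubject_correlation(bold)
group_synchronisation = isc_map.mean(axis=0)  # one ISC value per voxel
```

Significance of such maps is typically assessed nonparametrically (e.g. by permuting or phase-randomising time courses), since ISC values violate the independence assumptions of standard parametric tests; the contrasts reported in this study additionally used a general linear model conjunction analysis.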

Highlights

  • Day to day, we are exposed to a continuous stream of multisensory audio and visual stimulation

  • We found significant correlations of subjects’ time courses in the superior temporal gyrus (STG), the middle occipital gyrus (MOG), the left lingual gyrus, and the right cuneus

  • Compared with the visual-only condition (V), the audiovisual condition (AV) showed a reduced extent of significant correlation in the right MOG but a bilaterally greater extent in extrastriate visual cortex, comprising the lingual gyrus, fusiform face area (FFA), and cuneus


Introduction

1.1 Audiovisual integration

Day to day, we are exposed to a continuous stream of multisensory audio and visual stimulation. Our brain integrates these sensory signals from different modalities into a coherent one. Others’ movements, gestures, and emotional expressions are combined with auditory signals, such as their spoken words, to create a meaningful perception. A good illustration of such cross-modal integration of audiovisual signals (AV) is the McGurk effect (McGurk & MacDonald, 1976), where the resulting percept is a novel creation of the visual and auditory information. Most cases of AV integration are less spectacular. In those cases, the use of information from multiple sensory modalities enhances perceptual sensitivity, allowing more accurate judgments by experts and novices on particular parameters of the sensory stimulus (e.g. Arrighi, Marini, & Burr, 2009; Jola, Davis, & Haggard, 2011; Love, Pollick, & Petrini, 2012; Navarra & Soto-Faraco, 2005).
