Abstract

An audiovisual object may carry multiple semantic features, such as the gender and emotion of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in recognizing audiovisual objects. Humans often selectively attend to one or several features of an audiovisual object while ignoring the others. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions interact remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding the attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity, thereby regulating information flow from heteromodal areas to the brain areas encoding the attended features.
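The connectivity account in the last sentence can be made concrete with a toy seed-based analysis. The sketch below is illustrative only: it uses Python with NumPy rather than the study's fMRI tooling, and the time series, ROI roles, and noise levels are all hypothetical. It computes the Fisher-z-transformed correlation between a heteromodal seed (e.g., pSTS/MTG) and a feature-encoding region, separately for a condition in which the feature is attended and one in which it is ignored.

```python
import numpy as np

def functional_connectivity(seed_ts, target_ts):
    """Pearson correlation between two ROI time series."""
    return np.corrcoef(seed_ts, target_ts)[0, 1]

def fisher_z(r):
    """Fisher r-to-z transform so correlations can be averaged and compared."""
    return np.arctanh(r)

# Hypothetical mean ROI time series extracted per condition,
# e.g., seed = pSTS/MTG, target = a region encoding the attended feature.
rng = np.random.default_rng(0)
n_tp = 200
shared = rng.standard_normal(n_tp)

# Toy data: a stronger shared signal when the feature is attended.
seed_attended   = shared + 0.5 * rng.standard_normal(n_tp)
target_attended = shared + 0.5 * rng.standard_normal(n_tp)
seed_ignored    = shared + 2.0 * rng.standard_normal(n_tp)
target_ignored  = shared + 2.0 * rng.standard_normal(n_tp)

z_att = fisher_z(functional_connectivity(seed_attended, target_attended))
z_ign = fisher_z(functional_connectivity(seed_ignored, target_ignored))
print(f"connectivity (attended): z = {z_att:.2f}")
print(f"connectivity (ignored):  z = {z_ign:.2f}")
```

In the real analysis, such condition-wise connectivity estimates would be computed per subject and compared at the group level.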

Highlights

  • Previous studies have mainly focused on crossmodal attention, exploring how it modulates audiovisual sensory integration at various processing stages

  • By applying multivariate pattern analysis (MVPA) to the collected functional magnetic resonance imaging (fMRI) data, we directly assessed the semantic information encoded for the emotion and gender features of the stimuli, and we analyzed the functional connectivity between the brain areas encoding a semantic feature and the heteromodal areas associated with audiovisual integration, the pSTS/MTG and perirhinal cortex [18,19,20,21] (see the decoding sketch after this list)

  • To confirm that audiovisual sensory integration occurred in each experimental task and to identify the heteromodal areas associated with it, we performed a voxel-wise group analysis of the fMRI data based on a two-level mixed-effects general linear model (GLM) in SPM8 (a minimal GLM sketch also follows this list)
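As a rough illustration of the decoding step mentioned above, the sketch below trains a cross-validated linear classifier to read out a binary feature label (e.g., gender) from voxel activity patterns. This is a generic MVPA recipe in Python with scikit-learn, not the paper's actual pipeline; the data, ROI, and signal structure are hypothetical.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one activity pattern per trial within a candidate ROI.
# X: (n_trials, n_voxels) beta patterns; y: binary feature labels (e.g., gender).
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_voxels))
X[y == 1, :20] += 0.5  # weak class-dependent signal in a few voxels

# Linear decoder with standardization inside the pipeline, cross-validated.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(decoder, X, y, cv=cv, scoring="accuracy")

# Above-chance accuracy is taken as evidence that the ROI encodes the feature.
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f} (chance = 0.50)")
```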
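The voxel-wise GLM analysis reduces, at the first level, to ordinary least squares per voxel followed by a contrast test. The pure-NumPy sketch below stands in for what SPM8 does internally; the regressors, effect sizes, and single simulated voxel are hypothetical. It tests the classic superadditivity contrast AV > V + A, one common criterion for identifying heteromodal integration sites; the study's second, group level would then combine per-subject contrast images in a mixed-effects model.

```python
import numpy as np

# Hypothetical first-level design: three condition regressors (V, A, AV)
# already convolved with a hemodynamic response function, plus an intercept.
rng = np.random.default_rng(0)
n_scans = 300
X = np.column_stack([
    rng.random(n_scans),   # visual-only regressor (V)
    rng.random(n_scans),   # auditory-only regressor (A)
    rng.random(n_scans),   # audiovisual regressor (AV)
    np.ones(n_scans),      # intercept
])
y = X @ np.array([1.0, 1.0, 2.5, 0.0]) + rng.standard_normal(n_scans)  # one voxel

# OLS estimates and the superadditivity contrast AV - (V + A).
beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([-1.0, -1.0, 1.0, 0.0])
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = rss[0] / dof
t = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)
print(f"contrast AV - (V + A): t({dof}) = {t:.2f}")
```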


Introduction

Previous studies have mainly focused on crossmodal attention, exploring how it modulates audiovisual sensory integration at various processing stages. The neural representations of the gender and emotion features were analyzed by comparing reproducibility ratios or decoding accuracy rates across stimulus conditions (visual-only, auditory-only, and audiovisual) and experimental tasks (number, gender, emotion, and bi-feature), as sketched below.
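In that spirit, the comparison can be sketched as a loop over (task, condition) cells, computing a cross-validated decoding accuracy for each cell and contrasting them. The Python snippet below uses toy data and a generic linear decoder; the paper's reproducibility-ratio measure is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def decode_accuracy(X, y):
    """5-fold cross-validated accuracy of a linear decoder."""
    return cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()

conditions = ["visual-only", "auditory-only", "audiovisual"]
tasks = ["number", "gender", "emotion", "bi-feature"]

# Hypothetical per-cell datasets: patterns (n_trials, n_voxels) and labels.
for task in tasks:
    for cond in conditions:
        y = np.repeat([0, 1], 40)
        X = rng.standard_normal((80, 200))
        X[y == 1, :10] += 0.6  # toy signal; real data would come from fMRI
        print(f"{task:10s} / {cond:13s}: accuracy = {decode_accuracy(X, y):.2f}")
```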
