Abstract

Previous studies on multisensory integration (MSI) of musical emotions have yielded inconsistent results. Differences in the features of the musical materials and in participants' levels of musical expertise may account for these discrepancies. This study aims to explore the neural mechanism underlying the audio-visual integration of musical emotions and to infer the reasons for the inconsistent results in previous studies by investigating how the type of musical emotion and musical training experience influence this mechanism. This fMRI study used a block-design experiment. Music excerpts expressing fear, happiness, and sadness were presented under audio-only (AO) and audio-visual (AV) modality conditions. Participants were divided into two groups: musicians with many years of musical training and non-musicians with no musical expertise. After listening to or watching each excerpt, participants assessed the type and intensity of the musical emotion. Results showed that brain regions related to the MSI of emotional information and the default mode network (DMN) were sensitive to changes in sensory modality and emotion type. In the AV assessment stage, non-musicians showed greater activation across a larger, bilaterally distributed set of brain regions, whereas musicians showed activation in fewer regions, lateralized to the right hemisphere.
