Abstract

In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information carried by the auditory and visual modalities relies on the same or on different brain networks remains largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that characterizing this frontotemporal network as the brain's core language system takes too narrow a view. Our results instead indicate that these regions constitute a supramodal semantic processing network.

Highlights

  • A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study

  • Comprehension of natural language is a complex capacity that depends on several cognitive and neural systems

  • fMRI analyses targeting within-modality semantic processing showed that language-related semantics, as revealed by the contrast [S+ > S−], were processed in a mainly left-lateralized network encompassing an extended frontotemporal cluster, the SMA in the left hemisphere, and the right middle temporal gyrus (Table 1 and Figure 2a)


Summary

A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study

1 Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany, 2 Department of Psychiatry and Psychotherapy, RWTH Aachen University, Aachen, Germany, 3 Department of Neurology, RWTH Aachen University, Aachen, Germany, 4 Department of Psychology, Durham University, Durham, United Kingdom

Introduction
Methods
Results
Discussion
Conclusion