Abstract

In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study, we addressed this issue by using EEG to examine the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity in the theta range (4–6 Hz) in the left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were observed between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range may support the integration of multimodal semantic content by creating transient functional networks that link distributed modality-specific networks with multimodal semantic hubs such as the left ATL.
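To make the analysis described above concrete, the sketch below illustrates, in MNE-Python, the general kind of pipeline used to quantify induced theta-band (4–6 Hz) power and long-range phase coupling. It is a minimal illustration only: the file name, baseline window, and parameter values are hypothetical placeholders and do not reproduce the authors' actual pipeline.

    # Minimal sketch (not the authors' pipeline): induced theta-band
    # (4-6 Hz) power and phase coupling after target-word onset.
    # File name, baseline window, and parameters are hypothetical.
    import numpy as np
    import mne
    from mne.time_frequency import tfr_morlet
    from mne_connectivity import spectral_connectivity_epochs

    # Preprocessed epochs time-locked to the target word (hypothetical file).
    epochs = mne.read_epochs("sub01_target-epo.fif")

    # Morlet-wavelet decomposition over frequencies spanning the theta band.
    freqs = np.arange(3.0, 9.0, 1.0)   # Hz; includes the 4-6 Hz range of interest
    n_cycles = freqs / 2.0             # fewer wavelet cycles at lower frequencies
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                       use_fft=True, return_itc=False, average=True)

    # Baseline-correct to express induced power as percent change.
    power.apply_baseline(baseline=(-0.5, -0.1), mode="percent")

    # Long-range coupling in the same band, e.g., phase-locking value (PLV)
    # between all sensor pairs, averaged over 4-6 Hz.
    con = spectral_connectivity_epochs(epochs, method="plv",
                                       fmin=4.0, fmax=6.0, faverage=True)

    # A cross-modal vs. within-modality contrast would repeat both steps
    # per condition and compare the resulting power/PLV estimates.

Band-limited phase coupling of this kind, computed between a left ATL seed and distal sites, is one standard way to operationalize the "transient functional networks" the abstract describes.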

Highlights

  • The embodied framework of language suggests that lexical-semantic knowledge is stored in part in modality-specific networks that are distributed across the cortex [1,2,3,4]

  • The current study extends this body of literature by addressing the question of how distributed lexical-semantic features are integrated during word comprehension

  • Source reconstruction of this effect revealed a major peak in left anterior temporal lobe (ATL) as well as left middle occipital gyrus (MOG) (Figure 6A)


Introduction

The embodied framework of language suggests that lexical-semantic knowledge (i.e., word meaning) is stored in part in modality-specific networks that are distributed across the cortex [1,2,3,4]. Although ample evidence exists for the link between word meaning and perception/action systems, the bulk of research in this field has reduced lexical-semantic information to one dominant modality (e.g., vision for red and action for kick). Yet words clearly refer to items that are experienced through multiple modalities in the real world (e.g., a football is associated with both a specific visual form and a specific action), and embodied accounts of language have done little to address how multimodal information interacts during the processing of word meaning. Van Dam and colleagues [10] demonstrated that words denoting objects that are strongly associated with both action and visual information (e.g., tennis ball) reliably activate both motor and visual pathways in the cortex. Moreover, little is known about the causal role of modality-specific networks in lexical-semantic processing, or how they relate to more abstract semantic knowledge [11,12].

