Abstract

Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge, but these studies do not permit precise localisation of this function. The current investigation used multiple imaging methods in healthy participants to examine functional dissociations within the ATL. Multi-voxel pattern analysis identified spatially segregated regions: a response to input modality in the anterior superior temporal gyrus (aSTG) and a response to meaning in the more ventral anterior temporal lobe (vATL). This functional dissociation was supported by resting-state connectivity, which found greater coupling of aSTG with primary auditory cortex and of vATL with the default mode network. A meta-analytic decoding of these connectivity patterns implicated aSTG in processes closely tied to auditory processing (such as phonology and language) and vATL in meaning-based tasks (such as comprehension and social cognition). Thus, we provide converging evidence for the segregation of meaning and input modality in the ATL.
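Multi-voxel pattern analysis asks whether a classifier can predict a stimulus property (here, input modality or meaning) from the spatial pattern of voxel responses within a region. A minimal sketch of that decoding logic on synthetic data (the trial counts, voxel counts, and noise levels below are arbitrary illustrations, not the study's parameters or pipeline):

```python
# Sketch of MVPA decoding logic on synthetic data: can a cross-validated
# linear classifier tell written from spoken trials using the voxel pattern?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100  # trials per condition; voxels in the ROI

# Each modality evokes a slightly different mean pattern across voxels;
# single trials add independent noise on top.
written_mean = rng.normal(0, 1, n_voxels)
spoken_mean = written_mean + rng.normal(0, 0.5, n_voxels)  # modality effect
X = np.vstack([
    written_mean + rng.normal(0, 1, (n_trials, n_voxels)),
    spoken_mean + rng.normal(0, 1, (n_trials, n_voxels)),
])
y = np.repeat([0, 1], n_trials)  # 0 = written, 1 = spoken

# Cross-validated accuracy above chance (0.5) indicates that the region's
# activity patterns carry information about input modality.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In the study, running this kind of region-by-region decoding separately for modality and meaning is what distinguishes a modality-sensitive aSTG from a meaning-sensitive vATL; with real data, single-trial response estimates would replace the synthetic rows here.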

Highlights

  • Current neurocognitive models propose that concepts are represented in a large-scale distributed network comprising (1) sensory and motor ‘spoke’ regions that store knowledge of physical features and (2) convergence zones that integrate across multiple modalities to form abstract amodal representations (Damasio, 1989; Patterson et al., 2007).

  • The hub-and-spoke model of Patterson et al. (2007) proposes that information from modality-specific spoke regions is integrated in an amodal ‘hub’ region within the anterior temporal lobes (ATL). This allows the conceptual similarity of items that are semantically similar yet share few surface features, such as ‘flute’ and ‘violin’, to be represented, and makes it possible to map between modalities, so that from a flute’s name alone we can picture it and imagine the sound that it makes (e.g., Damasio, 1989; Lambon Ralph et al., 2010; Patterson et al., 2007; Rogers et al., 2004).

  • To better understand the neural architecture supporting the functional distinction between the anterior superior temporal gyrus (aSTG) and the anterior inferior temporal gyrus (aITG), we explored the connectivity of these regions in resting-state fMRI by placing spherical ROIs at peaks identified in the multi-voxel pattern analysis (MVPA).
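At its core, the seed-based connectivity analysis described in the last highlight correlates the mean time series of a seed ROI with the time series of every other voxel. A hedged sketch on synthetic resting-state data (the voxel indices, signal strengths, and array sizes are invented for illustration and are not the study's values):

```python
# Sketch of seed-based resting-state connectivity on synthetic data:
# correlate a spherical seed ROI's mean time series with every voxel.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_timepoints = 500, 200

# Simulate a slow shared fluctuation that drives the seed and a subset of
# "coupled" network voxels, on top of independent voxel-wise noise.
shared = rng.normal(0, 1, n_timepoints)
data = rng.normal(0, 1, (n_voxels, n_timepoints))
seed_idx = np.arange(0, 10)        # voxels inside the spherical seed ROI
coupled_idx = np.arange(100, 150)  # network voxels coupled to the seed
data[seed_idx] += shared
data[coupled_idx] += shared

# Seed time series = mean over ROI voxels; connectivity map = Pearson r
# between the seed series and each voxel's series.
seed_ts = data[seed_idx].mean(axis=0)
z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
conn_map = z @ seed_z / n_timepoints  # r of every voxel with the seed

print(conn_map[coupled_idx].mean())  # high for coupled voxels, ~0 elsewhere
```

With real data, seeds placed at the aSTG and vATL MVPA peaks would each yield such a whole-brain map, and comparing the maps reveals the differential coupling with auditory cortex versus the default mode network reported here.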


Introduction

Current neurocognitive models propose that concepts are represented in a large-scale distributed network comprising (1) sensory and motor ‘spoke’ regions that store knowledge of physical features and (2) convergence zones that integrate across multiple modalities (e.g., visual vs. auditory) to form abstract amodal representations (Damasio, 1989; Patterson et al., 2007). The hub-and-spoke model of Patterson et al. (2007) proposes that information from modality-specific spoke regions is integrated in an amodal ‘hub’ region within the anterior temporal lobes (ATL). This allows the conceptual similarity of items that are semantically similar yet share few surface features, such as ‘flute’ and ‘violin’, to be represented, and makes it possible to map between modalities, so that from a flute’s name alone we can picture it and imagine the sound that it makes (e.g., Damasio, 1989; Lambon Ralph et al., 2010; Patterson et al., 2007; Rogers et al., 2004). Although these neural regions are important for perception and action, they are also recruited during semantic processing to provide meaning to words (Barsalou, 1999, 2008; Martin, 2007; Patterson et al., 2007; Kiefer and Pulvermüller, 2012).

