Abstract

How the brain representation of conceptual knowledge varies as a function of processing goals, strategies and task factors remains a key unresolved question in cognitive neuroscience. In the present study, participants were presented with visual words during functional magnetic resonance imaging (fMRI). During shallow processing, participants had to read the items; during deep processing, they had to mentally simulate the features associated with the words. Multivariate classification, informational connectivity and encoding models were used to reveal how the depth of processing determines the brain representation of word meaning. Decoding accuracy in putative substrates of the semantic network was enhanced when the depth of processing was high, and the brain representations were more generalizable in semantic space relative to shallow processing contexts. This pattern was observed even in association areas in inferior frontal and parietal cortex. Deep information processing during mental simulation also increased the informational connectivity within key substrates of the semantic network. To further examine the properties of the words encoded in brain activity, we compared computer vision models, applied to the image referents of the words, with word embedding models. Computer vision models explained more variance of the brain responses across multiple areas of the semantic network. These results indicate that the brain representation of word meaning is highly malleable by the depth of processing imposed by the task, relies on access to visual representations, and is highly distributed, including prefrontal areas previously implicated in semantic control.
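The abstract's model comparison can be illustrated with a minimal encoding-model sketch: fit a ridge regression from each feature space (computer vision features vs. word embeddings) to voxel responses, then compare held-out variance explained. Everything below is synthetic and hypothetical (feature dimensions, voxel counts, the `ridge_r2` helper); it is not the authors' pipeline, only the general technique.

```python
import numpy as np

def ridge_r2(features, voxels, alpha=1.0, n_train=80):
    """Fit a ridge encoding model on a training split and return the
    mean R^2 of predicted voxel responses on the held-out split."""
    Xtr, Xte = features[:n_train], features[n_train:]
    Ytr, Yte = voxels[:n_train], voxels[n_train:]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    n_feat = Xtr.shape[1]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n_feat), Xtr.T @ Ytr)
    pred = Xte @ W
    ss_res = ((Yte - pred) ** 2).sum(axis=0)
    ss_tot = ((Yte - Yte.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1 - ss_res / ss_tot))

rng = np.random.default_rng(0)
n_words, n_voxels = 100, 50
vision_feats = rng.standard_normal((n_words, 20))  # stand-in for CNN activations
embed_feats = rng.standard_normal((n_words, 20))   # stand-in for word embeddings
# Synthetic voxel responses driven by the vision features plus noise
voxels = vision_feats @ rng.standard_normal((20, n_voxels)) \
         + 0.5 * rng.standard_normal((n_words, n_voxels))

r2_vision = ridge_r2(vision_feats, voxels)
r2_embed = ridge_r2(embed_feats, voxels)
print(round(r2_vision, 2), round(r2_embed, 2))
```

In this toy setup the voxel data are constructed from the vision features, so the vision encoding model explains more held-out variance; in the study, the analogous comparison is made on measured BOLD responses.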

Highlights

  • Grounded models of semantic cognition propose that knowledge about the world is re-enacted in the same modality-specific brain systems that are involved in perceptual or action processes

  • We found that the advantage of computer vision models over word embedding models was higher in the deep processing condition relative to the shallow processing condition in the fusiform gyrus (FFG), inferior parietal lobe (IPL), inferior temporal lobe (ITL), posterior cingulate gyrus (PCG), pars opercularis (POP) and pars orbitalis (POR)

  • The results clearly show that the decoding of word category information in most of the putative substrates of the semantic network [29] is consistently higher during the mental simulation condition relative to a shallow processing condition in which participants merely read the words
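The decoding comparison in the highlights can be sketched as cross-validated multivariate classification of voxel patterns under the two conditions. The sketch below uses a simple nearest-centroid classifier on synthetic data (all names, dimensions and signal strengths are hypothetical, not the study's decoder):

```python
import numpy as np

def decode_accuracy(patterns, labels, n_folds=5):
    """Cross-validated accuracy of a nearest-centroid classifier,
    a simple stand-in for the linear decoders used in MVPA."""
    idx = np.arange(len(labels))
    accs = []
    for fold in range(n_folds):
        test = idx % n_folds == fold
        train = ~test
        # Class centroids estimated from the training trials
        centroids = {c: patterns[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels)}
        preds = [min(centroids, key=lambda c: np.linalg.norm(p - centroids[c]))
                 for p in patterns[test]]
        accs.append(np.mean(preds == labels[test]))
    return float(np.mean(accs))

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 30
labels = np.repeat([0, 1], n_trials // 2)
# Shared category pattern, stronger in the "deep" condition
signal = np.where(labels[:, None] == 0, 1.0, -1.0) * rng.standard_normal(n_voxels)
deep = 1.0 * signal + rng.standard_normal((n_trials, n_voxels))
shallow = 0.2 * signal + rng.standard_normal((n_trials, n_voxels))

acc_deep = decode_accuracy(deep, labels)
acc_shallow = decode_accuracy(shallow, labels)
print(acc_deep, acc_shallow)
```

Because the category signal is weaker in the synthetic "shallow" condition, its cross-validated accuracy is lower, mirroring the direction of the reported effect.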

Introduction

Grounded models of semantic cognition propose that knowledge about the world is re-enacted in the same modality-specific brain systems that are involved in perceptual or action processes. The perceptual symbols theory [2] proposes that co-activation patterns in sensory-motor substrates are critical. On this account, conceptual knowledge involves an agent's brain simulating the different properties of the object in question (e.g. shape, colour, texture, sound, action) in a way that resembles how the information is encoded in sensorimotor systems during overt behaviour. Different neurocognitive models of semantic knowledge indicate the role of 'hub' regions, which are implicated in the integration of modality-specific (sensory) information and the formation of invariant conceptual representations [3]. Decoding studies indicate that the brain representation of semantic knowledge generalizes across words and images and is lateralized to the left hemisphere, involving the angular gyrus and intraparietal sulcus, and the posterior middle temporal gyrus [11].
