Abstract

Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
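To make the train/test design concrete, below is a minimal sketch of the cross-sentence generalisation analysis described above. It is not the authors' actual pipeline: it assumes one activation pattern per sentence, substitutes synthetic data for fMRI responses, and adopts a linear SVM with a leave-one-sentence-form-out scheme, all of which are illustrative assumptions implemented with scikit-learn.

# Sketch of cross-sentence generalisation decoding (illustrative only).
# Hypothetical setup: 4 events, each described by 6 different sentence
# forms, with one voxel pattern per sentence. A classifier is trained to
# discriminate the events on all but one sentence form and tested on the
# held-out form, so above-chance accuracy implies form-general coding.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_events, n_forms, n_voxels = 4, 6, 200
event = np.repeat(np.arange(n_events), n_forms)   # event label per pattern
form = np.tile(np.arange(n_forms), n_events)      # sentence-form index

# Synthetic "voxel patterns": a shared event signal plus pattern noise.
event_templates = rng.normal(size=(n_events, n_voxels))
X = event_templates[event] + 0.8 * rng.normal(size=(len(event), n_voxels))

# Leave one sentence form out: train on 5 forms, test on the 6th.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, event, groups=form):
    clf.fit(X[train_idx], event[train_idx])
    scores.append(clf.score(X[test_idx], event[test_idx]))

print(f"Cross-form decoding accuracy: {np.mean(scores):.2f} (chance = 0.25)")

In this scheme, chance accuracy is 25% for four events; decoding that succeeds only within, but not across, sentence forms would instead indicate sentence-specific coding of the kind reported in visual areas.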

Highlights

  • Most neuroscientific theories of semantic representation hold that meanings are coded, at least in part, independently of the stimuli used to elicit them (Binder and Desai, 2011; Meteyard et al., 2012; Patterson et al., 2007; Rogers et al., 2004; Simmons and Barsalou, 2003).

  • Previous studies have found that a network of left-lateralised semantic processing regions represents word and object concepts in a way that generalises across diverse stimulus forms.

  • We investigated whether such stimulus independence is a feature of the neural coding of event semantics.

Introduction

Most neuroscientific theories of semantic representation hold that meanings are coded, at least in part, independently of the stimuli used to elicit them (Binder and Desai, 2011; Meteyard et al., 2012; Patterson et al., 2007; Rogers et al., 2004; Simmons and Barsalou, 2003). These theories propose, for example, that the same semantic representation for the concept DOG is engaged whether one reads the word “dog”, sees a canine in the park or hears the sound of barking. This position is most strongly associated with hub-and-spoke theories (Hoffman et al., 2018; Lambon Ralph et al., 2017; Rogers et al., 2004), but even accounts that ground meaning in sensorimotor simulation (Simmons and Barsalou, 2003) are compatible with stimulus-independent representation, since the same simulations might be activated by a range of different stimuli.
