Abstract

In the brain, the semantic system is thought to store concepts. However, little is known about how it connects different concepts and infers semantic relations. To address this question, we collected hours of functional magnetic resonance imaging data from human subjects listening to natural stories. We developed a predictive model of the voxel-wise response and further applied it to thousands of new words. Our results suggest that both semantic categories and relations are represented by spatially overlapping cortical patterns, instead of anatomically segregated regions. Semantic relations that reflect conceptual progression from concreteness to abstractness are represented by cortical patterns of activation in the default mode network and deactivation in the frontoparietal attention network. We conclude that the human brain uses distributed networks to encode not only concepts but also relationships between concepts. In particular, the default mode network plays a central role in semantic processing for the abstraction of concepts.

Highlights

  • In the brain, the semantic system is thought to store concepts

  • Similar to prior work[1], we developed a predictive model of human functional magnetic resonance imaging responses given >11 h of natural story stimuli

  • We found that the whole-part relation was represented by a cortical pattern that manifested itself as the co-occurring activation of the DMN[32] and deactivation of the frontoparietal network[33,34] (FPN, including the LPFC, IPC, and posterior middle temporal gyrus (pMTG)) (Fig. 6b)

Introduction

The semantic system is thought to store concepts, yet little is known about how it connects different concepts and infers semantic relations. Humans can describe the potentially infinite features of the world and communicate with others using a finite number of words. To make this possible, our brains need to encode semantics[1], infer concepts from experiences[2], relate one concept to another[3,4], and learn new concepts[5]. Similar to prior work[1], we developed a predictive model of human functional magnetic resonance imaging (fMRI) responses given >11 h of natural story stimuli. In this model, individual words and their pairwise relationships were both represented as vectors in a continuous semantic space[21], which was learned from a large corpus and was linearly mapped onto the brain’s semantic system. The voxel-wise encoding model was estimated based on the fMRI data concatenated across all stories and subjects.
