Abstract

To explain how the human brain represents and organizes meaning, many theoretical and computational language models have been proposed over the years, varying in their underlying computational principles and in the language samples on which they are built. However, how well they capture the neural encoding of lexical semantics remains elusive. We used representational similarity analysis (RSA) to evaluate the extent to which three models of different types explained neural responses elicited by word stimuli: an External corpus-based word2vec model, an Internal free word association model, and a Hybrid ConceptNet model. Semantic networks were constructed from the word relations computed in the three models, and experimental stimuli were selected through a community detection procedure. The similarity patterns between language models and neural responses were compared at the community, exemplar, and word node levels to probe the potential hierarchical semantic structure. We found that semantic relations computed with the Internal model provided the closest approximation to the patterns of neural activation, whereas the External model did not capture neural responses as well. Compared with the exemplar and node levels, community-level RSA demonstrated the broadest involvement of brain regions, engaging areas critical for semantic processing, including the angular gyrus, the superior frontal gyrus, and a large portion of the anterior temporal lobe. The findings highlight the multidimensional semantic organization of the brain, which is better captured by Internal models sensitive to multiple modalities, such as word association, than by External models trained on text corpora.
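
The abstract does not specify the authors' implementation, but the second-order comparison at the heart of RSA can be sketched as follows: build a representational dissimilarity matrix (RDM) from each model's word vectors and from the neural responses to the same words, then rank-correlate the two RDMs. Everything below is an illustrative stand-in under assumed inputs (`model`, `neural`, hypothetical shapes), not the paper's actual pipeline or stimuli.

```python
# Minimal RSA sketch: compare a language model's similarity structure
# against neural similarity structure over the same word stimuli.
# Inputs are hypothetical stand-ins, not the authors' data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix (condensed form):
    pairwise correlation distance between rows, one row per word."""
    return pdist(patterns, metric="correlation")

def rsa_score(model_patterns: np.ndarray, neural_patterns: np.ndarray) -> float:
    """Spearman rank correlation between the model RDM and the neural RDM,
    the standard second-order similarity comparison used in RSA."""
    rho, _ = spearmanr(rdm(model_patterns), rdm(neural_patterns))
    return rho

# Illustrative use with random stand-ins for 50 word stimuli:
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 300))    # e.g., 300-d word2vec embeddings
neural = rng.normal(size=(50, 1000))  # e.g., 1000 voxel responses per word
print(f"RSA (Spearman rho): {rsa_score(model, neural):.3f}")
```

In this scheme, the community-, exemplar-, and node-level analyses described above would differ only in how rows are grouped or averaged before the RDMs are computed; the rank correlation step is the same at every level.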
