Abstract

Recent years have seen a growing interest within the natural language processing (NLP) community in evaluating the ability of semantic models to capture human meaning representation in the brain. Existing research has mainly focused on applying semantic models to decode brain activity patterns associated with the meaning of individual words, and, more recently, this approach has been extended to sentences and larger text fragments. Our work is the first to investigate metaphor processing in the brain in this context. We evaluate a range of semantic models (word embeddings, compositional, and visual models) on their ability to decode brain activity associated with the reading of both literal and metaphoric sentences. Our results suggest that compositional models and word embeddings are able to capture differences in the processing of literal and metaphoric sentences, providing support for the idea that the literal meaning is not fully accessible during familiar metaphor comprehension.

Highlights

  • Distributional semantics aims to represent the meaning of linguistic fragments as high-dimensional dense vectors (a minimal compositional sketch follows this list)

  • Jain and Huth (2018) investigated long short-term memory (LSTM) recurrent neural networks and showed that semantic models that incorporate larger-sized context windows outperform those with smaller-sized context windows, as well as the baseline bag-of-words model, in predicting brain activity associated with narrative listening

  • To contribute to our understanding of metaphor comprehension, including the accessibility of the literal meaning, we investigate whether semantic models are able to decode patterns of brain activity associated with literal and metaphoric sentence comprehension, using the functional magnetic resonance imaging (fMRI) dataset of Djokic et al.
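
The sketch below gives a concrete picture of the first highlight: sentence vectors built by averaging pre-trained word embeddings, the simplest additive compositional model. The embedding table, the 300-dimensional size, and the example sentence pair are illustrative placeholders, not the models or stimuli used in the study.

    import numpy as np

    # Toy embedding table standing in for a pre-trained model such as word2vec
    # or GloVe; the 300-dimensional vectors here are random placeholders.
    rng = np.random.default_rng(0)
    words = {"the", "politician", "attacked", "criticised", "plan"}
    embeddings = {w: rng.standard_normal(300) for w in words}

    def additive_sentence_vector(tokens, embeddings):
        # Average the word vectors of the tokens found in the embedding table:
        # the simplest additive compositional model.
        vectors = [embeddings[t] for t in tokens if t in embeddings]
        return np.mean(vectors, axis=0)

    # Invented example pair: a metaphoric sentence and a literal paraphrase.
    metaphoric = additive_sentence_vector("the politician attacked the plan".split(), embeddings)
    literal = additive_sentence_vector("the politician criticised the plan".split(), embeddings)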


Summary

Introduction

Distributional semantics aims to represent the meaning of linguistic fragments as high-dimensional dense vectors. Recent research has demonstrated the ability of distributional models to predict patterns of brain activity associated with the meaning of words, obtained via functional magnetic resonance imaging (fMRI) (Mitchell et al., 2008; Devereux et al., 2010; Pereira et al., 2013). Following in their steps, Anderson et al. (2017b) investigated visually grounded semantic models in this context. Jain and Huth (2018) investigated long short-term memory (LSTM) recurrent neural networks and showed that semantic models that incorporate larger-sized context windows outperform those with smaller-sized context windows, as well as the baseline bag-of-words model, in predicting brain activity associated with narrative listening. We investigate the extent to which lexical and compositional semantic models are able to capture differences in human meaning representations, resulting from meaning disambiguation of literal and metaphoric uses of words in context.
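
To make the decoding setup concrete, the sketch below learns a ridge-regression mapping from voxel activity patterns to sentence vectors and scores it with pairwise matching accuracy, a common evaluation in this line of work. All data are synthetic stand-ins, and the single train/test split, regularisation strength, and dimensionalities are illustrative assumptions rather than the exact procedure of the study.

    import numpy as np
    from numpy.linalg import norm
    from sklearn.linear_model import Ridge

    # Synthetic stand-ins: 60 "sentences", 1000 voxels, 300-dimensional sentence
    # vectors. A real experiment would use measured fMRI responses and
    # model-derived sentence representations instead.
    rng = np.random.default_rng(1)
    n_items, n_voxels, n_dims = 60, 1000, 300
    semantic = rng.standard_normal((n_items, n_dims))
    brain = (semantic @ rng.standard_normal((n_dims, n_voxels))
             + 0.5 * rng.standard_normal((n_items, n_voxels)))

    def cosine(a, b):
        return a @ b / (norm(a) * norm(b))

    def pairwise_accuracy(pred, true):
        # Fraction of item pairs for which the matched predicted/true vectors
        # are more similar (by cosine) than the mismatched assignment.
        correct, total = 0, 0
        for i in range(len(true)):
            for j in range(i + 1, len(true)):
                matched = cosine(pred[i], true[i]) + cosine(pred[j], true[j])
                mismatched = cosine(pred[i], true[j]) + cosine(pred[j], true[i])
                correct += matched > mismatched
                total += 1
        return correct / total

    # Single illustrative train/test split; cross-validation would be used in practice.
    train, test = np.arange(40), np.arange(40, 60)
    decoder = Ridge(alpha=1.0).fit(brain[train], semantic[train])
    accuracy = pairwise_accuracy(decoder.predict(brain[test]), semantic[test])
    print(f"pairwise decoding accuracy: {accuracy:.2f}")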

