Abstract

Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. The first model is linguistic, exploiting the recent word2vec skip-gram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based model for the most abstract nouns. More generally, this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
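The decoding analysis is easiest to picture as a concrete procedure. Below is a minimal, hypothetical sketch of leave-two-out pairwise decoding in the spirit of Mitchell et al (2008): a linear map is learned from semantic vectors to voxel patterns, and the two held-out items are matched to the model's predictions by correlation. The synthetic arrays, the choice of ridge regression, and the correlation-based matching rule are all illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch of leave-two-out pairwise decoding (Mitchell et al, 2008 style).
# X holds semantic model vectors (one row per noun); Y holds fMRI voxel patterns.
# All data here is synthetic; in the study, X would come from the word2vec or CNN
# model and Y from the scanned activity patterns.
import itertools
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, n_dims, n_voxels = 20, 50, 200
X = rng.standard_normal((n_words, n_dims))                    # stand-in semantic vectors
true_map = rng.standard_normal((n_dims, n_voxels))
Y = X @ true_map + 0.5 * rng.standard_normal((n_words, n_voxels))  # noisy "fMRI" patterns

def pairwise_accuracy(X, Y):
    # Leave two words out, learn a linear map from semantic space to voxel
    # space on the rest, then test whether the two held-out patterns are
    # matched to the correct predictions (chance level = 0.5).
    hits = 0
    pairs = list(itertools.combinations(range(len(X)), 2))
    for i, j in pairs:
        train = [k for k in range(len(X)) if k not in (i, j)]
        model = Ridge(alpha=1.0).fit(X[train], Y[train])
        pred_i, pred_j = model.predict(X[[i, j]])
        corr = lambda a, b: np.corrcoef(a, b)[0, 1]
        correct = corr(pred_i, Y[i]) + corr(pred_j, Y[j])
        swapped = corr(pred_i, Y[j]) + corr(pred_j, Y[i])
        hits += correct > swapped
    return hits / len(pairs)

print(f"pairwise decoding accuracy: {pairwise_accuracy(X, Y):.2f}")
```

With informative semantic vectors this score rises well above the 0.5 chance level, which is the sense in which a model is said to significantly decode a set of nouns.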

Highlights

  • Since the work of Mitchell et al (2008), there has been increasing interest in using computational semantic models to interpret neural activity patterns scanned as participants engage in conceptual tasks

  • Different modelling approaches — predominantly distributional semantic models (Mitchell et al, 2008; Devereux et al, 2010; Murphy et al, 2012; Pereira et al, 2013; Carlson et al, 2014) and semantic models based on human behavioural estimation of conceptual features (Palatucci et al, 2009; Sudre et al, 2012; Chang et al, 2010; Bruffaerts et al, 2013; Fernandino et al, 2015) — have elucidated how different brain regions contribute to semantic representation of concrete nouns; how these results extend to non-concrete nouns is unknown

  • We extend previous work by applying image- and text-based computational semantic models to decode a functional Magnetic Resonance Imaging (fMRI) data set spanning a diverse set of nouns of varying concreteness (a sketch of how such models can be assembled follows this list)
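As referenced in the last highlight, the sketch below shows how the two kinds of semantic vectors might be assembled with off-the-shelf tools: gensim for word2vec skip-gram vectors and torchvision for deep CNN image features. The VGG-16 architecture, the file names, and the image list are placeholders; the paper's own models were trained on Wikipedia and Google Images respectively and may differ in architecture and preprocessing.

```python
# Hypothetical sketch: assembling linguistic and visually grounded vectors.
# File paths, the example noun, and the VGG-16 choice are placeholders.
import numpy as np
import torch
from gensim.models import KeyedVectors
from PIL import Image
from torchvision import models
from torchvision.models import VGG16_Weights

# Linguistic model: word2vec skip-gram vectors (here assumed pretrained on a
# Wikipedia dump and saved in word2vec binary format).
wv = KeyedVectors.load_word2vec_format("wiki_skipgram.bin", binary=True)
text_vector = wv["dog"]  # one distributional vector per noun

# Visually grounded model: penultimate-layer CNN features, averaged over a
# set of images retrieved for the noun.
weights = VGG16_Weights.IMAGENET1K_V1
cnn = models.vgg16(weights=weights).eval()
cnn.classifier = torch.nn.Sequential(*list(cnn.classifier.children())[:-1])  # drop the class-label layer
preprocess = weights.transforms()

def image_vector(image_paths):
    # Average 4096-dimensional fc7 features over the noun's images.
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(cnn(img).squeeze(0).numpy())
    return np.mean(feats, axis=0)

image_vec = image_vector(["dog_001.jpg", "dog_002.jpg"])  # placeholder files
```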


Introduction

Since the work of Mitchell et al (2008), there has been increasing interest in using computational semantic models to interpret neural activity patterns scanned as participants engage in conceptual tasks. Andrews et al (2009) demonstrated that multi-modal models, formed by combining text-based distributional information with behaviourally generated conceptual properties (as a surrogate for perceptual experience), provide a better proxy for human-like intelligence. Both the text-based and behaviourally-based components of their model were derived from linguistic information. Anderson et al (2015) have demonstrated that visually grounded models describe brain activity associated with internally induced visual features of objects as the ob-
