Abstract

Linguistic labels are known to facilitate object recognition, yet the mechanism of this facilitation is not well understood. Previous psychophysical studies have suggested that words guide visual perception by activating information about visual object shape. Here we aimed to test this hypothesis at the neural level and to tease apart the visual and semantic contributions of words to visual object recognition. We created a set of object pictures from two semantic categories with varying shapes and obtained subjective ratings of their shape and category similarity. We then conducted a word-picture matching experiment while recording participants’ EEG, and tested whether the shape or the category similarity between the word’s referent and the target picture explained the spatiotemporal pattern of the picture-evoked responses. The results show that hearing a word activates a representation of its referent’s shape, which interacts with the visual processing of a subsequent picture within 100 ms of picture onset. Furthermore, non-visual categorical information carried by the word affects visual processing at later stages. These findings advance our understanding of the interaction between language and visual perception and provide insights into how the meanings of words are represented in the brain.
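
As a concrete illustration of the analysis described above, the sketch below shows one way a representational-similarity-style comparison could be set up: at each time point, the dissimilarity structure of the picture-evoked EEG patterns is correlated with model matrices derived from the shape- and category-similarity ratings. All array names, dimensions, and the simulated data are illustrative assumptions, not the authors’ actual pipeline.

    # Sketch of a representational-similarity-style analysis relating picture-evoked
    # EEG patterns to shape- and category-similarity models (illustrative data only).
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    n_items, n_channels, n_times = 20, 64, 300            # hypothetical dimensions
    eeg = np.random.randn(n_items, n_channels, n_times)   # item x channel x time responses
    shape_sim = np.random.rand(n_items, n_items)          # subjective shape-similarity ratings
    cat_sim = np.random.rand(n_items, n_items)            # subjective category-similarity ratings

    # Convert the rating matrices to condensed dissimilarity vectors (upper triangle).
    iu = np.triu_indices(n_items, k=1)
    shape_rdm = 1 - shape_sim[iu]
    cat_rdm = 1 - cat_sim[iu]

    shape_corr, cat_corr = np.zeros(n_times), np.zeros(n_times)
    for t in range(n_times):
        # Neural dissimilarity between picture-evoked patterns at this time point.
        neural_rdm = pdist(eeg[:, :, t], metric="correlation")
        shape_corr[t], _ = spearmanr(neural_rdm, shape_rdm)
        cat_corr[t], _ = spearmanr(neural_rdm, cat_rdm)

    # shape_corr and cat_corr trace how well each model explains the EEG pattern over time.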

Highlights

  • Humans possess the unique ability to label objects

  • We found a robust correlation between reaction times and subjective shape similarity (mean within-subject correlation M = 0.36 ± 0.22, significantly different from zero across subjects, t(19) = 6.95, p < 0.001, d = 1.55)

  • The reaction times were not correlated with the category similarity ratings (M = 0.007 ± 0.16, t(19) = 0.18, n.s.)
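
For illustration, the sketch below mirrors the structure of the analysis reported in the two highlights above: per-subject correlations between reaction times and shape-similarity ratings, followed by a one-sample t-test of those correlations against zero. Variable names and the simulated data are assumptions for demonstration only, not the study’s data.

    # Illustrative per-subject correlation analysis with a group-level one-sample t-test.
    import numpy as np
    from scipy.stats import pearsonr, ttest_1samp

    rng = np.random.default_rng(0)
    n_subjects, n_pairs = 20, 45                             # hypothetical design
    rt = rng.normal(600, 50, size=(n_subjects, n_pairs))     # word-picture matching RTs (ms)
    shape_sim = rng.random((n_subjects, n_pairs))            # per-subject shape-similarity ratings

    # Correlation between shape similarity and reaction time, computed per subject.
    r_per_subject = np.array([pearsonr(shape_sim[s], rt[s])[0] for s in range(n_subjects)])

    # One-sample t-test of the correlations against zero (df = n_subjects - 1 = 19).
    t_stat, p_value = ttest_1samp(r_per_subject, 0.0)
    cohens_d = r_per_subject.mean() / r_per_subject.std(ddof=1)
    print(f"mean r = {r_per_subject.mean():.2f}, t(19) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")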

Introduction

Humans possess the unique ability to label objects. How does this ability transform cognition and perception? This question goes to the core of what it means to be human. In previous EEG work, word-picture congruency could be predicted from P1 latency on a trial-by-trial basis, but only in label-cued trials. These results indicate that verbal cues provide top-down guidance for visual perception and change, early on, how subsequently incoming visual information is processed. People also show a substantial bias toward orienting to semantically related objects, e.g., toward a picture of socks after hearing the target word “belt”. These observations have led to the cascaded model of visual-linguistic interactions [26,27,28,29], which suggests that words evoke both visual and semantic representations.
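
As a rough illustration of the kind of trial-by-trial analysis referred to above (predicting word-picture congruency from single-trial P1 latency), the sketch below fits a cross-validated logistic regression. The data, feature choice, and classifier are assumptions made for this example and are not taken from the cited study.

    # Minimal sketch: predict word-picture congruency from single-trial P1 latency.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials = 200
    p1_latency = rng.normal(100, 10, size=(n_trials, 1))   # single-trial P1 latency (ms)
    congruent = rng.integers(0, 2, size=n_trials)          # 1 = word matches picture, 0 = mismatch

    # Cross-validated accuracy above chance would indicate that P1 latency carries
    # information about congruency on a trial-by-trial basis.
    acc = cross_val_score(LogisticRegression(), p1_latency, congruent, cv=5, scoring="accuracy")
    print(f"mean cross-validated accuracy: {acc.mean():.2f}")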
