Abstract

The ability to combine individual concepts of objects, properties, and actions into complex representations of the world is often associated with language. Yet combinatorial event-level representations can also be constructed from nonverbal input, such as visual scenes. Here, we test whether the language network in the human brain is involved in and necessary for semantic processing of events presented nonverbally. In Experiment 1, we scanned participants with fMRI while they performed a semantic plausibility judgment task versus a difficult perceptual control task on sentences and line drawings that describe/depict simple agent-patient interactions. We found that the language network responded robustly during the semantic task performed on both sentences and pictures (although its response to sentences was stronger). Thus, language regions in healthy adults are engaged during a semantic task performed on pictorial depictions of events. But is this engagement necessary? In Experiment 2, we tested two individuals with global aphasia, who have sustained massive damage to perisylvian language areas and display severe language difficulties, against a group of age-matched control participants. Individuals with aphasia were severely impaired on the task of matching sentences to pictures. However, they performed close to controls in assessing the plausibility of pictorial depictions of agent-patient interactions. Overall, our results indicate that the left frontotemporal language network is recruited but not necessary for semantic processing of nonverbally presented events.

Highlights

  • We demonstrate that left hemisphere language regions are active during the semantic processing of events shown as pictures, although events shown as sentences elicit a stronger response

  • We further show that the language network is not essential for nonverbal event semantics, given that two individuals with global aphasia, who lack most of their left hemisphere language network, can still evaluate the plausibility of visually presented events

  • Our study advances the field in three ways: (i) it explores relational semantic processing in the domain of events, moving beyond the semantics of single objects—the focus of most prior neuroscience work on conceptual processing; (ii) it evaluates neural overlap between verbal and nonverbal semantics in fMRI at the level of individual participants; and (iii) it provides causal evidence in support of a dissociation between language and nonverbal event semantics


Introduction

Many thinkers have argued for an intimate relationship between language and thought, in fields as diverse as philosophy (Carruthers, 2002; Davidson, 1975; Wittgenstein, 1961), psychology (Sokolov, 1972; Vygotsky, 2012; Watson, 1920), linguistics (Berwick & Chomsky, 2016; Bickerton, 1990; Chomsky, 2007; Hinzen, 2013; Jackendoff, 1996), and artificial intelligence (Brown et al., 2020; Goldstein & Papert, 1977; Turing, 1950; Winograd, 1976). According to such accounts, language enables us to access our vast knowledge of objects, properties, and actions—often referred to as semantic knowledge—and flexibly combine individual semantic units to produce complex situation-specific representations called thoughts.

Global aphasia: a severe form of language impairment caused by damage to the language network, resulting in substantial impairments in both production and comprehension.

