Abstract

Human speech comprehension is remarkable for its immediacy and rapidity. The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic combination, whereby the meanings of words are combined into more complex representations, as in the combination of a verb and its following direct object (DO) noun (e.g., "eat the apple"). These combinatorial processes form the backbone for incremental interpretation, enabling listeners to integrate the meaning of each word as it is heard into their dynamic interpretation of the current utterance. Focusing on the verb-DO noun relationship in simple spoken sentences, we applied multivariate pattern analysis and computational semantic modeling to source-localized electro/magnetoencephalographic data to map out the specific representational constraints that are constructed as each word is heard, and to determine how these constraints guide the interpretation of subsequent words in the utterance. Comparing context-independent semantic models of the DO noun with contextually constrained noun models reflecting the semantic properties of the preceding verb, we found that only the contextually constrained model showed a significant fit to the brain data. Pattern-based measures of directed connectivity across the left hemisphere language network revealed a continuous information flow among temporal, inferior frontal, and inferior parietal regions, underpinning the verb's modification of the DO noun's activated semantics. These results provide a plausible neural substrate for seamless real-time incremental interpretation on the observed millisecond time scales.
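To make the analysis logic concrete, the sketch below illustrates the general form of such a model comparison using representational similarity analysis (RSA): two model representational dissimilarity matrices (RDMs), one built from context-independent noun vectors and one from contextually constrained vectors, are each correlated with a time-resolved neural RDM computed from source-localized electro/magnetoencephalographic (EMEG) response patterns. This is a minimal illustration only; the embeddings, array shapes, the toy "contextualization" step, and the random data are all hypothetical placeholders, not the authors' actual pipeline.

```python
# Minimal RSA-style sketch of comparing two semantic models against
# neural pattern data. Everything here is a hypothetical placeholder.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_items = 60     # spoken sentences, one direct-object noun each (hypothetical)
n_dims = 300     # dimensionality of the word-embedding space (hypothetical)
n_sources = 200  # source-space vertices in a left-hemisphere ROI (hypothetical)
n_times = 50     # time points in the analysis window (hypothetical)

# Context-independent noun vectors (e.g., off-the-shelf embeddings) and
# contextually constrained vectors (noun meaning shifted toward the
# semantic constraints of the preceding verb). The 50/50 blend below is
# a toy stand-in for whatever contextualization scheme is actually used.
noun_vecs = rng.standard_normal((n_items, n_dims))
verb_vecs = rng.standard_normal((n_items, n_dims))
constrained_vecs = 0.5 * noun_vecs + 0.5 * verb_vecs

# Model RDMs: pairwise correlation distances between item vectors,
# returned as the condensed upper triangle.
rdm_independent = pdist(noun_vecs, metric="correlation")
rdm_constrained = pdist(constrained_vecs, metric="correlation")

# Simulated EMEG source data: items x sources x time (random stand-in).
meg = rng.standard_normal((n_items, n_sources, n_times))

# Time-resolved RSA: at each time point, correlate the neural RDM with
# each model RDM, yielding one model-fit time course per model.
fit_independent = np.empty(n_times)
fit_constrained = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(meg[:, :, t], metric="correlation")
    fit_independent[t], _ = spearmanr(neural_rdm, rdm_independent)
    fit_constrained[t], _ = spearmanr(neural_rdm, rdm_constrained)

# In the study, only the contextually constrained model showed a
# significant fit; here the outputs are meaningless because the data
# are random.
print(fit_independent.mean(), fit_constrained.mean())
```

Spearman rank correlation is a conventional choice for comparing RDMs because it does not assume a linear relation between model and neural dissimilarities; in practice, significance would be assessed against a permutation-based null distribution rather than by inspecting raw correlation values.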

Highlights

  • The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge

  • Human speech comprehension is remarkable for its immediacy and rapidity

  • While this research provides an overall picture of the brain regions underpinning semantic combination, relatively little is known about the specific neural dynamics of these processes, or about the combinatorial mechanisms by which the meaning of each word is selectively integrated into its utterance context


Introduction

The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic composition, whereby the meanings of words are combined into more complex representations, such as the combination of a modifier and noun (e.g., “green dress”) or, as in the current study, a verb and its direct object (DO) noun (e.g., “eat the apple”). These combinatorial processes form the backbone of the incremental interpretation of spoken language, enabling listeners to integrate the meaning of each word as it is heard into a dynamically modulated multilevel representation of the preceding words of the utterance. Recent neuroimaging studies have identified the left angular gyrus (LAG) [9, 10, 11] as well as the left anterior temporal lobe (LATL) [12, 13] as regions involved in semantic combination, with a recent magnetoencephalographic (MEG) study showing that LATL activity precedes activity in the frontal cortex during combinatory semantic processing [13].
