Abstract

The neural network supporting aspects of syntactic, prosodic, and semantic information processing is specified on the basis of two experiments using functional magnetic resonance imaging (fMRI). In these two studies, the presence or absence of lexical-semantic and syntactic information is systematically varied in spoken language stimuli. Inferior frontal and temporal brain areas in the left and the right hemisphere are identified as supporting different aspects of auditory language processing. Two additional experiments using event-related brain potentials investigate the possible interaction of syntactic and prosodic information, on the one hand, and of syntactic and semantic information, on the other. While the first two information types are shown to interact early during processing, the latter two do not. Implications for models of auditory language comprehension are discussed.
