Abstract

A controversial issue in spoken language comprehension concerns whether different sources of information are encapsulated from each other. Do listeners finish processing lower-level information (e.g., encoding acoustic differences) before beginning higher-level processing (e.g., determining the meaning of a word or its grammatical status)? We addressed these questions by examining the time-course of processing using an event-related potential experiment with a component-independent design. Listeners heard voiced/voiceless minimal pairs differing in (1) lexical status, (2) syntactic class (noun/verb distinctions), and (3) semantic content (animate/inanimate distinctions). For each voiced stimulus in a given condition (e.g., lexical status pair TUB/tup), there was a corresponding pair with a voiceless ending (tob/TOP). Stimuli were cross-spliced, allowing us to control for phonological and acoustic differences and examine higher-level effects independently of them. Widespread lexical status effects are observed shortly after the cross-splicing point (i.e., the time when the lexical properties of the word can first be determined) and persist for an extended time. Moreover, there is considerable overlap in the times during which both lexical status and semantic content effects are observed. These results suggest that multiple types of linguistic representations are simultaneously active in listeners during spoken word recognition, consistent with parallel processing models.
