Abstract
A fundamental issue in spoken language comprehension involves understanding the interaction of linguistic representations across different levels of organization (e.g., phonological, lexical, syntactic, and semantic). In particular, there is debate about when different levels are accessed during spoken word recognition. Under serial processing models, comprehension is sequential. In contrast, under parallel processing models, simultaneous activation of representations at multiple levels can occur. The current study investigates this issue by isolating neural responses to syntactic class distinctions from acoustic and phonological responses. EEG data were collected in an event-related potential (ERP) experiment in which participants (N = 26) listened to words varying in syntactic class (nouns versus adjectives) that were controlled for low-level acoustic differences via cross-splicing. Machine learning techniques were used to decode syntactic class from ERP responses over time. Results showed that syntactic class is decodable approximately 160–190 ms after the average syntactic point of disambiguation in the words, a window during which listeners are still processing acoustic information. This supports the prediction that different levels of representation have overlapping timecourses. Overall, these results are consistent with a parallel, interactive processing model of spoken word recognition, in which higher-level information—such as syntactic class—is accessed while acoustic analysis is still occurring.
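To illustrate the kind of time-resolved decoding described above, below is a minimal sketch of decoding a binary syntactic class label (noun versus adjective) from epoched EEG data at each time point. It is not the authors' analysis pipeline: the data are simulated, and the classifier (logistic regression with 5-fold cross-validation, scored with ROC AUC), the channel count, sampling rate, and epoch length are all hypothetical choices for the example.

```python
# Minimal sketch of time-resolved decoding of syntactic class from ERP epochs.
# Assumptions (not from the paper): simulated data, logistic regression,
# 5-fold cross-validation, ROC AUC computed independently at each time sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 32, 150   # hypothetical dimensions
sfreq = 250.0                                  # hypothetical sampling rate (Hz)

# X: epochs x channels x time samples; y: 0 = noun, 1 = adjective
X = rng.normal(size=(n_epochs, n_channels, n_times))
y = rng.integers(0, 2, size=n_epochs)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Fit and score the classifier separately at each time point; sustained
# above-chance AUC marks when class information is present in the signal.
auc_over_time = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])

times_ms = np.arange(n_times) / sfreq * 1000.0  # time axis in milliseconds
peak = auc_over_time.argmax()
print(f"Peak decoding AUC {auc_over_time[peak]:.2f} at {times_ms[peak]:.0f} ms")
```

With real data, the time axis would be aligned to the point of disambiguation in each word rather than to stimulus onset, so that a reliable decoding window can be interpreted relative to when the acoustic signal first distinguishes the two classes.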