Abstract

A critical question in speech perception is the relative independence of perceptual and semantic processing. Answering it requires addressing two issues. First, does perceptual processing complete before semantic processing begins (discrete stages vs. continuous cascades)? Second, do semantic expectations affect perceptual processing (feedback)? These questions are difficult to address because there are few measures of early perceptual processing for speech. We extend a recent electroencephalography (EEG) paradigm that has shown sensitivity to pre-categorical encoding of Voice-Onset Time (VOT; Toscano et al., 2010). By measuring the timecourse over which perceptual and semantic factors affect the neural signal, we quantify how these processes interact. Participants (N = 31) heard sentences (Good dogs also sometimes—) that biased them to expect a target word (bark rather than park). We manipulated the VOT of the target word and the coarticulation leading to it. A component-independent analysis determined, at 2 msec intervals, when each cue affected the continuous EEG signal. This revealed an early window (125–225 msec) sensitive exclusively to bottom-up information, a late window (400–575 msec) sensitive to semantic information, and a critical intermediate window (225–350 msec) during which VOT and coarticulation are processed simultaneously with semantic expectations. This suggests continuous cascades and early interactions between perceptual and semantic processes.
