Abstract
Several fundamental questions about speech perception concern how listeners understand spoken language despite considerable variability in speech sounds across different contexts (the problem of lack of invariance in speech). This contextual variability is caused by several factors, including differences between individual talkers' voices, variation in speaking rate, and effects of coarticulatory context. A number of models have been proposed to describe how the speech system handles differences across contexts. Critically, these models make different predictions about (1) whether contextual variability is handled at the level of acoustic cue encoding or categorization, (2) whether it is driven by feedback from category-level processes or by interactions between cues, and (3) whether listeners discard fine-grained acoustic information to compensate for contextual variability. Separating the effects of cue- and category-level processing has been difficult because behavioral measures tap processes that occur well after initial cue encoding and are influenced by task demands and linguistic information. Recently, we have used the event-related brain potential (ERP) technique to examine cue encoding and online categorization. Specifically, we have looked at differences in the auditory N1 as a measure of acoustic cue encoding and the P3 as a measure of categorization. This approach allows us to examine multiple levels of processing during speech perception and provides a useful tool for studying effects of contextual variability. Here, I apply it to determine the point in processing at which context affects speech perception and to examine whether acoustic cues are encoded continuously. Several types of contextual variability (talker gender, speaking rate, and coarticulation), as well as several acoustic cues (voice onset time, formant frequencies, and bandwidths), are examined in a series of experiments. The results suggest that (1) at early stages of speech processing, listeners encode continuous differences in acoustic cues, independent of context.
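To make the measurement logic concrete, the sketch below shows one way the N1/P3 contrast described above could be quantified: mean amplitude in an early time window (N1) versus a late window (P3), computed at each step of a voice onset time (VOT) continuum. This is a minimal illustration on simulated data using only NumPy; the sampling rate, electrode setup, time windows, VOT steps, and the ~20 ms category boundary are illustrative assumptions, not the parameters or analyses used in the experiments.

```python
import numpy as np

# Simulated epoched EEG at one electrode: (n_trials, n_samples),
# sampled at 500 Hz over a 0-700 ms post-stimulus epoch (assumed values).
FS = 500
times = np.arange(0, 0.7, 1 / FS)

# Hypothetical analysis windows (assumptions, not the paper's values):
N1_WINDOW = (0.080, 0.130)   # early auditory component
P3_WINDOW = (0.300, 0.600)   # late categorization-related component

def mean_amplitude(epochs, window):
    """Average voltage across trials, then across the time window."""
    lo, hi = window
    mask = (times >= lo) & (times < hi)
    erp = epochs.mean(axis=0)      # average over trials -> ERP waveform
    return erp[mask].mean()        # mean amplitude within the window

rng = np.random.default_rng(0)
vot_steps = np.arange(0, 45, 5)    # 0-40 ms VOT continuum (assumed)

n1_by_vot, p3_by_vot = [], []
for vot in vot_steps:
    # Toy generative model: N1 amplitude scales linearly with VOT
    # (continuous cue encoding), whereas P3 tracks category membership
    # (step-like around an assumed ~20 ms /b/-/p/ boundary).
    n1_true = -4.0 + 0.05 * vot
    p3_true = 6.0 if vot > 20 else 2.0

    epochs = rng.normal(0.0, 1.0, size=(60, times.size))
    n1_mask = (times >= N1_WINDOW[0]) & (times < N1_WINDOW[1])
    p3_mask = (times >= P3_WINDOW[0]) & (times < P3_WINDOW[1])
    epochs[:, n1_mask] += n1_true
    epochs[:, p3_mask] += p3_true

    n1_by_vot.append(mean_amplitude(epochs, N1_WINDOW))
    p3_by_vot.append(mean_amplitude(epochs, P3_WINDOW))

# A linear N1-by-VOT trend alongside a step-like P3 pattern would
# indicate continuous cue encoding followed by discrete categorization.
for vot, n1, p3 in zip(vot_steps, n1_by_vot, p3_by_vot):
    print(f"VOT {vot:2d} ms: N1 = {n1:+.2f} uV, P3 = {p3:+.2f} uV")
```

Under these assumptions, a graded N1 pattern across the continuum is the signature of continuous acoustic cue encoding, while a categorical P3 pattern reflects the later categorization stage the abstract distinguishes it from.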