Abstract
This study investigates how listeners use information about articulatory timing in consonant sequences during lexical access. Research on spoken word recognition of assimilated consonant sequences has shown that listeners make use of detailed variation in the acoustic signal and of knowledge of systematic phonological variability [e.g., Gaskell and Marslen‐Wilson (1996), Gow and McMurray (to appear)]. However, no earlier work has reported on real‐time processing of consonant sequences for which articulatory data are available. This study combines articulator movement tracking with real‐time eye‐movement monitoring to relate the articulatory timing properties associated with assimilation to listeners' use of the resulting acoustic signal in word recognition. Words to be used as stimuli in an eye‐tracking task are recorded using magnetometry. These contain consonant sequences spanning a continuum of intergestural overlap, obtained by varying prosody and speech rate. In the eye‐tracking experiment, listeners hear two‐word stimuli such as ‘‘bad ban,’’ some of which are perceived with an assimilated final consonant in the first word, and select a picture that matches the stimulus. The recorded eye movements provide a window into real‐time processing [see Altmann and Kamide (2004)], allowing us to investigate how listeners use articulatory timing cues for lexical access. [Supported by NIH.]