Abstract

Sound contrasts are redundantly cued in the speech stream by acoustic features spanning various time scales. Listeners are presented with evidence for a particular category at various temporal intervals and must coalesce this information into a coherent percept to achieve accurate recognition. Previous work on tone languages has shown that listeners prioritize consonants, then vowels, then lexical tone during phonological and word processing, despite lexical tone being a suprasegmental cue that unfolds with the vowel. We present an online eye-tracking study to assess the time course of Cantonese listeners' recognition of a target word (e.g., 包 /pa͡u55/ `bun') with competitors for rime (北 /pak55/ `north'), onset (敲 /ha͡u55/ `to knock'), and tone (爆 /pa͡u33/ `to explode') co-present on the screen. This design allows us to test the relative prioritization and contribution of consonant, vowel, and tone information in phonological processing. If vowels are prioritized before tones, we predict increased looking times to tone competitors. If vowels and tones are processed jointly, we predict equal looking times to vowel and tone competitors. Data collection with Gorilla is ongoing. Data analysis will focus on overall proportions of looking time to the target and competitors.
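
As a rough illustration of the planned analysis, the sketch below computes per-trial proportions of looking time to each interest area (target and the onset, rime, and tone competitors) from a hypothetical long-format fixation report. The column names (`participant`, `trial`, `interest_area`, `fixation_duration`) and the example values are illustrative assumptions, not the study's actual data format.

```python
import pandas as pd

# Hypothetical fixation report: one row per fixation, with the interest
# area it landed on and its duration in milliseconds. Column names and
# values are illustrative assumptions only.
fixations = pd.DataFrame({
    "participant":       ["p01"] * 6,
    "trial":             [1, 1, 1, 2, 2, 2],
    "interest_area":     ["target", "tone_competitor", "target",
                          "onset_competitor", "target", "rime_competitor"],
    "fixation_duration": [180, 120, 300, 90, 410, 150],  # ms
})

# Total looking time per interest area within each trial.
looking_time = (
    fixations
    .groupby(["participant", "trial", "interest_area"], as_index=False)
    ["fixation_duration"].sum()
)

# Express each interest area's looking time as a proportion of the trial
# total, so target, onset, rime, and tone competitors can be compared.
looking_time["proportion"] = (
    looking_time["fixation_duration"]
    / looking_time.groupby(["participant", "trial"])["fixation_duration"]
      .transform("sum")
)

print(looking_time)
```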
