Abstract

Listeners use lexical knowledge to guide the perception of phonetically ambiguous speech sounds and to retune phonetic category boundaries. While phonetic ambiguity has many sources, diachronic sound changes represent a more systematic form of phonetic variation. Sound changes may neutralize the phonetic and phonological contrast between two lexical competitors, rendering lexical knowledge an unreliable scaffold for boundary retuning in perceptual learning paradigms, particularly in languages with a high proportion of monosyllabic lexical items (e.g., Cantonese). We present pilot results of an auditory-object-to-picture matching task for perceptual learning of English vowels in monosyllabic words. Seven-step [æ]–[ɑ] continua were created for three minimal pairs (e.g., "gnat"–"knot"). Images corresponding to the real-word endpoints were presented at three stimulus onset asynchrony (SOA) intervals: −250, 0, and +250 ms. Participants indicated whether the visual image matched the auditory token. Preliminary findings indicate that listeners judge the auditory token and picture to be a congruent match at a higher rate in the 0 ms SOA condition. Our future work will analyze the learning that follows this exposure method and apply this SOA in a Cantonese perceptual learning study focused on adaptation to lexical tone variation.
