Abstract

Four experiments investigated the novel issue of learning to accommodate the co-articulated nature of speech. Experiment 1 established a co-articulatory mismatch effect for a set of vowel–consonant (VC) syllables: reaction times were faster for stimuli with matching co-articulatory information than for those with mismatching information. A rhyme-judgment training task on words (Experiment 2) or VC stimuli (Experiment 3) containing mismatching co-articulatory information was followed by a phoneme-monitoring task on a set of VC stimuli; the training and test stimuli contained either physically identical (same condition) or new (different condition) mismatching co-articulatory information, along with a set containing matching co-articulatory information. A third group received no training. A co-articulatory mismatch effect was found without training, but not when the same mismatching tokens were used at training and test. Both word (Experiment 2) and syllable (Experiment 3) training stimuli eliminated the mismatch effect; overall reaction times were somewhat slower when the training stimuli were words. Perceptual learning generalized to new tokens only when the acoustic manifestation of the critical co-articulatory information in the training stimuli was sufficiently large (Experiments 3 and 4). The results are discussed in terms of speech processing and perceptual learning in speech perception.
