Abstract

Lexically-guided and visually-guided perceptual learning have been argued to tap into the same general perceptual mechanism. Using the visually-guided paradigm, some have argued that the resulting retuning effect is specific to the phonetic context in which it is learned, a finding that in turn has been used to argue that such retuning targets context-dependent sub-lexical units. We present three new experiments that study how far lexically-guided perceptual learning of fricative consonants generalizes and how type variation in the training stimuli affects it. In contrast to visually-guided retuning, we show that lexical retuning does generalize to new phonetic contexts, particularly when listeners are trained with type variation. This suggests that an abstract, context-independent representation is used in speech perception and during lexical retuning. While the same generalization is not clearly observed when type variation is eliminated, the lack of a clear interaction effect between training types prevents us from inferring that lexically-guided perceptual learning requires type variation within the training stimuli in order to generalize to new phonetic contexts. Furthermore, we point out that some of these effects are subtle and are only observable if we take into account pre-training group differences between the control and test groups.
