Abstract

Listeners use lexical information to modify the mapping between speech acoustics and speech sound categories. Although lexically guided perceptual learning is conventionally treated as a binary outcome, the magnitude of the learning effect varies across the extant literature. We hypothesize that graded learning outcomes can be linked, in part, to statistical characteristics of the to-be-learned input, consistent with the ideal adapter theory of speech adaptation. Following standard methods (i.e., waveform averaging to create ambiguous variants), a lexically guided perceptual learning stimulus set for the /ʃ/-/s/ contrast was created for each of 16 talkers, yielding variability across talkers in the statistical cues specifying this contrast. Experiment 1 will (a) measure lexically guided perceptual learning for each talker, (b) identify input characteristics that are associated with learning magnitude, and (c) examine whether a computational instantiation of the ideal adapter theory can model the input-learning link. Experiment 2 will provide a confirmatory test of the patterns observed in Experiment 1 by manipulating the to-be-learned input while holding talker constant. The results will provide a critical test of the ideal adapter framework for speech adaptation, thus informing an understanding of the mechanisms that allow listeners to solve the lack-of-invariance problem in speech perception.
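The "waveform averaging" the abstract mentions is, in its simplest form, a sample-wise weighted mix of two time-aligned fricative tokens. The sketch below illustrates that idea only; function names, the equal-length truncation, and the 50/50 weight are illustrative assumptions, not the authors' stimulus pipeline (which typically also involves amplitude normalization and splicing into word frames).

```python
import numpy as np

def average_waveforms(s_wave, sh_wave, weight=0.5):
    """Blend an /s/ token and a /ʃ/ token sample-by-sample to create an
    ambiguous fricative. `weight` is the proportion of the /ʃ/ token
    (0.5 gives an equal blend). Simplified sketch: real stimulus
    construction would first align and amplitude-normalize the tokens."""
    n = min(len(s_wave), len(sh_wave))  # truncate to the common length
    return (1.0 - weight) * s_wave[:n] + weight * sh_wave[:n]

# Toy example: an equal blend of two constant "waveforms" lands midway.
blend = average_waveforms(np.array([1.0, 1.0, 1.0]), np.array([0.0, 0.0, 0.0]))
```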
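The "computational instantiation of the ideal adapter theory" is, at its core, Bayesian belief updating over a talker's category statistics. As a minimal sketch, assume listeners track a Gaussian belief about a category's mean cue value (e.g., a fricative's spectral centroid) with known observation noise; the conjugate update below shows how exposure shifts that belief. This is a simplification for illustration: the full ideal adapter framework also updates beliefs about category variance, and the function and variable names here are hypothetical.

```python
def update_gaussian_belief(prior_mean, prior_var, observations, noise_var):
    """Conjugate Bayesian update of a Gaussian belief about a category's
    mean cue value, given observed cue values with known noise variance.
    More exposure tokens (larger n) pull the posterior mean toward the
    sample mean and shrink posterior uncertainty."""
    n = len(observations)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(observations) / noise_var)
    return post_mean, post_var

# One ambiguous token at cue value 2.0 shifts a prior centered at 0.0
# partway toward the evidence and reduces uncertainty.
post_mean, post_var = update_gaussian_belief(0.0, 1.0, [2.0], 1.0)
```

On this view, graded learning falls out naturally: how far the posterior moves depends on the statistics of the exposure input relative to the prior, which is the input-learning link Experiment 1 probes.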
