Abstract

Conversation partners’ speech acoustics sometimes converge, providing evidence for the transfer of information from speech perception to one’s own speech productions. The nature of this phonetic convergence has remained elusive, with variable experimental findings regarding the contexts in which phonetic convergence emerges, the acoustic speech features it affects, and the gender of the talkers most influenced. Here, we approach phonetic convergence through the lens of dimension-based statistical learning, whereby the statistical regularities of short-term speech input impact the perceptual weight of acoustic dimensions in speech categorization. Participants passively listened to a randomly ordered sequence of 4 pier and 4 beer utterances, sampled across voice onset time (VOT) and fundamental frequency (F0) in a manner that either aligned with English norms (Canonical) or departed from typical English pronunciations as an ‘accent’ (Reverse). Immediately after, participants categorized and repeated aloud an ambiguous test stimulus varying only in F0, with ambiguous VOT. The Reverse input regularities led both male and female participants to down-weight F0 in both perceptual categorization and the acoustics of word repetitions. The results indicate that statistical learning across passive listening to another speaker’s voice can lead to detailed acoustic-phonetic adjustments in one’s own speech productions.
