Abstract

Little is known about the cognitive capacities that underlie real-time accommodation in spoken language, or how those capacities allow conversing speakers to adapt their speech production. This study first presents a simple attunement model that incorporates hypothesized capacities, focusing on individual variability as one such capacity. The model makes explicit predictions about observable convergence behaviors in interacting speakers: i) the intrinsically more variable speaker of the pair will be the one who converges to their partner; ii) this flexible speaker, with higher baseline variability, will exhibit a substantial decrease in variability; and iii) the same speaker will show a greater change in variability between speaking solo and interacting with their partner. These predictions are supported by the results of the modeling simulations. To test the model's predictions further, we analyzed a behavioral dataset of acoustic and articulatory data from three pairs of interacting speakers who performed a maze navigation task as well as a comparable solo speech task. The amount of variability in each dyad member's speech parameters was quantified using the coefficient of variation. The experimental results parallel the simulation results, and taken together, this work indicates that structured variability is an illuminating index of individual speaker adaptability and convergence behavior.
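The coefficient of variation referenced above is the sample standard deviation divided by the mean, giving a unitless measure of relative variability that can be compared across speakers and speech parameters. A minimal sketch of how such a quantification might look (the function name and the pitch values are illustrative assumptions, not data from the study):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean; unitless relative variability."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Hypothetical F0 (pitch, Hz) measurements for one speaker in two conditions.
solo = [210.0, 225.0, 198.0, 240.0, 205.0]
interactive = [215.0, 218.0, 212.0, 220.0, 216.0]

# Under the model's prediction, a flexible speaker's variability
# decreases when interacting relative to speaking solo.
print(coefficient_of_variation(solo))
print(coefficient_of_variation(interactive))
```

Because CV normalizes by the mean, it allows variability comparisons across parameters measured on different scales (e.g., formant frequencies vs. articulator displacements).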
