Abstract

Previous research has demonstrated that speakers readily entrain to one another in synchronized speech tasks (e.g., Cummins 2002, Vatikiotis-Bateson et al. 2014, Natif et al. 2014), but the mixture of auditory and visual cues they use to achieve such alignment remains unclear. In this work, we extend the dual-EMA paradigm of Tiede et al. (2012) to observe the speech and coordinated head movements of speaker pairs interacting face-to-face during synchronized production in three experimental tasks: the “Grandfather” passage, repetition of short rhythmically consistent sentences, and competing alternating word pairs (e.g., “topper-cop” vs. “copper-top”). The first task was read with no eye contact, the second was read and then produced with eye contact, and the third required continuous eye contact. Head movement was characterized using the tracked position of the upper incisor reference sensor. Prosodic prominence was identified acoustically from F0 and amplitude contours, and articulatorily from gestural stiffness along articulator trajectories. Preliminary results show that the frequency and amplitude of synchronized head movement increased with the degree of eye contact required by the task, and that head movement was coordinated systematically with both acoustic and articulatory measures of prosodic prominence. [Work supported by NIH.]
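The abstract does not specify how the acoustic prominence measures were computed; as a rough illustration of the F0 and amplitude contour extraction it describes, the sketch below uses librosa and scipy in Python. The file name, thresholds, and the simple peak-picking heuristic are placeholders for exposition, not the authors' actual procedure.

```python
import librosa
import numpy as np
from scipy.signal import find_peaks

# Load a recording (file name is a placeholder)
y, sr = librosa.load("utterance.wav", sr=None)

# F0 contour via probabilistic YIN (NaN in unvoiced frames)
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Amplitude (RMS) contour on the same default hop grid
rms = librosa.feature.rms(y=y)[0]
times = librosa.times_like(rms, sr=sr)

# Candidate prominences: local amplitude maxima in voiced regions.
# Threshold and spacing values here are illustrative only.
peaks, _ = find_peaks(rms, prominence=0.02, distance=10)
voiced_peaks = [p for p in peaks if p < len(voiced_flag) and voiced_flag[p]]

for p in voiced_peaks:
    print(f"t={times[p]:.2f}s  rms={rms[p]:.3f}  f0={f0[p]:.1f} Hz")
```

In practice, prominence detection of this kind would be aligned with the articulatory (EMA) record, e.g., by comparing peak times against gestural stiffness estimates from articulator trajectories, but that step is study-specific and not shown here.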
