Abstract

Although many studies have observed a close relationship between prosodic structure and co-speech gestures, little is understood about cross-modal gestural coordination. The present study examines the relationship between articulatory and co-speech gestures at prosodic boundaries and under prominence, focusing on non-referential manual and eyebrow beat gestures in Korean, a language in which co-speech gestures are virtually unexplored. The study hypothesizes that prosodic structure systematically governs the production of both speech and co-speech gestures and their temporal organization. Multimodal signals of a story reading were collected from eight speakers (5 female, 3 male). The lips, tongue, and eyebrows were point-tracked using electromagnetic articulography (EMA), and vertical manual movements were auto-tracked from a video recording using a geometrical centroid tracking method. Measurements included the durations of intervals from concurrent beat-gesture onsets and targets to 1) consonant gesture onsets and targets, 2) vowel gesture onsets and targets, 3) pitch gesture (F0) onsets and targets, and 4) phrasal boundaries. Results reveal systematic inter-articulator coordination patterns, suggesting that beat gestures co-occurring with speech gestures are recruited to signal information grouping and highlighting. The findings are discussed with reference to the nature of prosodic representation and models of speech planning. [Work supported by NSF.]
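The abstract does not specify how the geometrical centroid tracking was implemented. As a minimal, hypothetical sketch of the general technique, the Python snippet below extracts a per-frame vertical manual trajectory by thresholding each video frame and computing the vertical coordinate of the segmented region's geometric centroid. The use of OpenCV, the fixed binary threshold, and the function name are all assumptions for illustration, not the authors' method.

```python
import cv2
import numpy as np

def track_vertical_centroid(video_path, thresh=200):
    """Return one vertical centroid coordinate (in pixels) per video frame.

    Hypothetical sketch: segmentation via a fixed grayscale threshold is an
    assumption; the study does not describe how the hand region was isolated.
    """
    cap = cv2.VideoCapture(video_path)
    ys = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Isolate the bright (assumed hand) region as a binary mask.
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            # Geometric centroid: y = M01 / M00.
            ys.append(m["m01"] / m["m00"])
        else:
            ys.append(np.nan)  # no region detected in this frame
    cap.release()
    return np.asarray(ys)
```

Landmarks such as beat-gesture onsets and targets could then be located as extrema or velocity zero-crossings in this trajectory, and interval durations computed as differences between those timestamps and the corresponding EMA or F0 landmarks; that downstream step is likewise only sketched here, not taken from the source.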
