Abstract

As the study of embodiment and multimodality in interaction grows in importance, there is a need for novel methodological approaches to understand how multimodal variables pattern together along social and contextual lines, and how they systematically coalesce into communicative meanings. In this work, we propose to adopt computational tools to generate replicable annotations of bodily variables: these can be examined statistically to understand their patterning with other variables across diverse speakers and interactional contexts, and can help organize qualitative analyses of large datasets. We demonstrate these possibilities with a case study of head cant (side‐to‐side tilt of the head) in a dataset of video blogs and laboratory‐collected interactions, computationally extracting cant and prosody from video and audio and analyzing their interactions, with particular attention to gender. We find that head cant indexes an orientation towards the interlocutor and a sense of shared understanding, can serve a ‘bracketing’ function in interaction (for speakers to create parentheticals or asides), and has gendered associations with prosodic markers and interactional discourse particles.
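The abstract does not specify the extraction pipeline, but as a minimal sketch of the kind of replicable annotation proposed here, head cant can be approximated frame by frame as the roll angle of the inter‐ocular line, assuming per‐frame 2D eye‐centre landmarks from an off‐the‐shelf face tracker (e.g., OpenFace or MediaPipe); the landmark values below are hypothetical.

```python
import math

def head_cant_degrees(left_eye, right_eye):
    """Signed head cant (roll) in degrees, estimated from the 2D pixel
    positions (x, y) of the two eye centres, with y increasing downward.
    0 means a level head; the sign indicates the direction of the tilt."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical per-frame landmarks, as any face tracker might emit them:
frames = [
    {"left_eye": (210.0, 305.0), "right_eye": (290.0, 303.0)},  # near level
    {"left_eye": (205.0, 298.0), "right_eye": (288.0, 322.0)},  # canted
]
for i, f in enumerate(frames):
    cant = head_cant_degrees(f["left_eye"], f["right_eye"])
    print(f"frame {i}: cant = {cant:+.1f} deg")
```

A per‐frame angle series of this kind can then be aligned with prosodic measurements from the audio track and entered into the statistical analyses the abstract describes.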
