Abstract

This paper presents an analysis of interactive alignment (Pickering & Garrod, 2004) from a multimodal perspective (Guichon & Tellier, 2017) in two telecollaborative settings. We propose a framework for analyzing alignment during desktop videoconferencing in its multimodality, covering lexical and structural alignment (Michel & Cappellini, 2019) as well as multimodal alignment involving facial expressions. We analyze two datasets drawn from two different models of telecollaboration. The first is based on the Français en (première) ligne model (Develotte et al., 2007), which puts future foreign language teachers in contact with learners of that language. The second is based on the teletandem model (Telles, 2009), in which students with different mother tongues interact to help each other use and learn the other's language. The paper makes explicit a semi-automatic procedure for studying alignment multimodally. We tested our method on a dataset composed of two one-hour sessions. Results show that in desktop videoconferencing-based telecollaboration, facial expression alignment is a pivotal component of multimodal alignment.
