The goal of this paper is to bring new insights to the study of social learning processes by designing measures of collaboration using high-frequency sensor data. More specifically, we are interested in understanding the interplay between moments of collaboration and cooperation, which is an understudied area of research. We collected a multimodal dataset during a collaborative learning activity typical of makerspaces: learning how to program a robot. Pairs of participants were introduced to computational thinking concepts using a block-based environment. Mobile eye-trackers, physiological wristbands, and motion sensors captured their behavior and social interactions. In this paper, we analyze the eye-tracking data to capture participants’ tendency to synchronize their visual attention. This paper provides three contributions: (1) we use an emerging methodology (mobile dual eye-tracking) to capture joint visual attention in a co-located setting and replicate findings showing that levels of joint visual attention are positively correlated with collaboration quality; (2) we qualitatively analyze the co-occurrence of verbal activity and joint visual attention in low- and high-performing groups to better understand moments of collaboration and cooperation; (3) inspired by these qualitative observations and by theories of collaborative learning, we design a new quantitative measure that captures cycles of collaborative and cooperative work. Compared with simple measures of joint visual attention, this measure yields higher correlation coefficients with learning and collaboration scores. We discuss these results and describe how advances in analyzing sensor data can contribute to theories of collaboration. We conclude with implications for capturing students’ interactions in co-located spaces using Multimodal Learning Analytics (MMLA).
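To make the core construct concrete, the sketch below shows one common way joint visual attention (JVA) is quantified from dual eye-tracking data: the proportion of samples in which both participants fixate the same area of interest (AOI) within a small temporal lag window. This is a minimal illustration under stated assumptions, not the paper's actual measure; the function name `jva_proportion`, the AOI labels, and the lag parameter are all hypothetical, and the paper's cycle-based measure of collaborative/cooperative work is not reproduced here.

```python
# Hedged sketch: JVA as the fraction of samples where both participants
# look at the same AOI within +/- max_lag samples. Assumes gaze streams
# have already been mapped to a shared AOI vocabulary on a common timeline;
# None marks samples with no valid fixation.

def jva_proportion(gaze_a, gaze_b, max_lag=4):
    """Proportion of gaze_a samples matched by gaze_b within the lag window."""
    n = len(gaze_a)
    joint = 0
    for t in range(n):
        if gaze_a[t] is None:
            continue  # no fixation for participant A at this sample
        lo, hi = max(0, t - max_lag), min(n, t + max_lag + 1)
        # count a joint-attention sample if B fixates the same AOI nearby in time
        if any(gaze_b[s] == gaze_a[t] for s in range(lo, hi)
               if gaze_b[s] is not None):
            joint += 1
    return joint / n if n else 0.0

# Illustrative AOI streams (hypothetical labels for this task context)
a = ["robot", "robot", "screen", "blocks", None, "screen"]
b = ["screen", "robot", "robot", "screen", "blocks", "screen"]
print(jva_proportion(a, b, max_lag=1))  # 5/6: joint attention at 5 of 6 samples
```

A lag window is used because co-located partners rarely fixate the same target at exactly the same instant; prior dual eye-tracking work typically tolerates offsets on the order of a second or two.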