Abstract

A distinctive feature of game‐based learning environments is their capacity to create learning experiences that are both effective and engaging. Recent advances in sensor‐based technologies such as facial expression analysis and gaze tracking have introduced the opportunity to leverage multimodal data streams for learning analytics. Learning analytics informed by multimodal data captured during students’ interactions with game‐based learning environments hold significant promise for developing a deeper understanding of game‐based learning, designing game‐based learning environments to detect maladaptive behaviors, and informing adaptive scaffolding to support individualized learning. This paper introduces a multimodal learning analytics approach that incorporates student gameplay, eye tracking, and facial expression data to predict student posttest performance and interest after interacting with a game‐based learning environment, Crystal Island. We investigated the degree to which separate and combined modalities (i.e., gameplay, facial expressions of emotions, and eye gaze) captured from students (n = 65) were predictive of student posttest performance and interest after interacting with Crystal Island. Results indicate that when predicting student posttest performance and interest, models utilizing multimodal data either perform equally well or outperform models utilizing unimodal data. We discuss the synergistic effects of combining modalities for predicting both student interest and posttest performance. The findings suggest that multimodal learning analytics can accurately predict students’ posttest performance and interest during game‐based learning and hold significant potential for guiding real‐time adaptive scaffolding.
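To make the comparison of unimodal and multimodal models concrete, the sketch below illustrates one common way such an analysis can be set up: early fusion by feature concatenation, with cross-validated regression on each feature set. This is not the authors' implementation; the feature arrays, dimensionalities, and model choice are hypothetical placeholders used only to show the structure of the comparison.

```python
# Minimal sketch (assumptions, not the paper's method): compare unimodal vs.
# multimodal feature sets for predicting a posttest score, using hypothetical
# per-student features for gameplay, eye gaze, and facial expressions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students = 65  # sample size reported in the abstract

# Hypothetical feature matrices, one per modality (dimensions are illustrative).
gameplay = rng.normal(size=(n_students, 12))  # e.g., in-game actions and events
gaze = rng.normal(size=(n_students, 8))       # e.g., fixation counts/durations
facial = rng.normal(size=(n_students, 6))     # e.g., facial expression evidence
posttest = rng.normal(size=n_students)        # outcome: posttest performance

def evaluate(features, label):
    """Report cross-validated R^2 for a single feature set."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    score = cross_val_score(model, features, posttest, cv=5, scoring="r2").mean()
    print(f"{label:>20}: mean R^2 = {score:.3f}")

# Unimodal models.
evaluate(gameplay, "gameplay only")
evaluate(gaze, "eye gaze only")
evaluate(facial, "facial expressions")

# Multimodal model: early fusion via feature concatenation.
evaluate(np.hstack([gameplay, gaze, facial]), "all modalities")
```

With real features, the same loop over feature sets yields the kind of unimodal-versus-multimodal comparison the abstract describes; the random placeholder data here will not show any meaningful predictive signal.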
