Abstract

Using multimodal data fusion techniques, we built and tested prediction models to track middle-school students' distress states during educational gameplay. We collected and analyzed 1,145 data instances, sampled from the audio- and video-recorded gameplay sessions of 31 middle-school students. We wrangled student gameplay data from multiple sources, such as individual facial expression recordings and gameplay logs. Using supervised machine learning, we built and tested candidate classifiers, each of which yielded an estimated probability of a distress state. We then conducted confidence-based data fusion, averaging the estimated probability scores from the unimodal classifiers, each built on a single data source. The results suggest that the classifier with multimodal data fusion tracks distress states during educational gameplay better than the unimodal classifiers do. These findings support the feasibility of multimodal data fusion for developing game-based learning analytics. The study also discusses the benefits of optimizing several methodological choices for multimodal data fusion in educational game research.
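The confidence-based fusion described above can be sketched in a few lines: each unimodal classifier emits an estimated probability of distress for an instance, and the fused score is the average of those probabilities. This is a minimal illustration, not the authors' implementation; the modality names, threshold, and helper functions below are assumptions for the example.

```python
def fuse_probabilities(modality_scores):
    """Confidence-based late fusion: average the per-modality
    probability estimates for a single gameplay instance."""
    if not modality_scores:
        raise ValueError("need at least one modality score")
    return sum(modality_scores) / len(modality_scores)

def classify(fused_score, threshold=0.5):
    """Label the instance from the fused probability (threshold assumed)."""
    return "distress" if fused_score >= threshold else "no distress"

# Hypothetical instance: the facial-expression classifier estimates 0.8,
# the gameplay-log classifier estimates 0.4.
fused = fuse_probabilities([0.8, 0.4])
print(round(fused, 2))   # 0.6
print(classify(fused))   # distress
```

Averaging is the simplest fusion rule; weighted averages or learned combiners are common alternatives when one modality is more reliable than another.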
