Abstract
This paper presents three multimodal learning analytics approaches from a hands-on learning activity. We use video, audio, gesture, and bio-physiological data from a two-condition study (N = 20) to identify correlations among the multimodal data, experimental condition, and two learning outcomes: design quality and learning. The three approaches incorporate: 1) human-annotated coding of video data; 2) automated coding of gesture, audio, and bio-physiological data; and 3) a concatenation of the human-annotated and automatically annotated data. Within each analysis, we employ the same machine learning and sequence mining techniques. Ultimately, we find that each approach provides different affordances depending on the similarity metric and the dependent variable. For example, the analysis based on human-annotated data found strong correlations among multimodal behaviors, experimental condition, success, and learning when we relaxed constraints on temporal similarity. The second approach performed well when comparing students' multimodal behaviors as time series, but was less effective under the temporally relaxed similarity metric. The takeaway is that there are several viable strategies for doing multimodal learning analytics, and that many of these approaches can provide meaningful glimpses into a complex data set, glimpses that would be difficult to obtain using traditional approaches.
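The abstract contrasts an order-sensitive time-series comparison of students' behavior sequences with a "temporally relaxed" similarity metric. The abstract does not specify which metrics were used, so the sketch below is illustrative only: it assumes dynamic time warping (DTW) as the order-sensitive measure and a bag-of-codes overlap as the temporally relaxed one, applied to hypothetical annotation streams (one behavioral code per time window).

```python
from collections import Counter
import numpy as np

def dtw_distance(a, b):
    """Order-sensitive comparison: dynamic time warping over two
    code sequences, with a 0/1 cost for matching/mismatched codes."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else 1.0
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match/substitution
    return D[n, m]

def relaxed_similarity(a, b):
    """Temporally relaxed comparison: overlap between the multisets of
    codes each student produced, ignoring when they occurred."""
    ca, cb = Counter(a), Counter(b)
    overlap = sum((ca & cb).values())  # shared code counts
    return overlap / max(len(a), len(b))

# Hypothetical annotation streams for two students.
s1 = ["gesture", "talk", "build", "build", "talk"]
s2 = ["talk", "build", "gesture", "talk", "build"]
print(dtw_distance(s1, s2))        # > 0: penalizes ordering differences
print(relaxed_similarity(s1, s2))  # 1.0: same codes, different order
```

Under this assumed pairing, two students who exhibit the same behaviors in a different order look dissimilar to the time-series metric but identical to the relaxed one, which is one way the choice of metric could yield the differing correlations the abstract reports.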