Abstract
A multimodal online learning environment enhances the learning experience through different modalities such as visual, auditory, and kinesthetic interactions. Multimodal learning analytics (MMLA) with multiple biosensors offers a way to analyze these multiple interaction types simultaneously. Galvanic skin response/electrodermal activity (GSR/EDA), eye tracking, and facial expression analysis were used to measure learning interaction in a multimodal online learning environment. iMotions and R software were used to post-process and analyze the time-synchronized biosensor data. GSR/EDA, eye tracking, and facial expression captured real-time cognitive, emotional, and visual learning engagement for each interaction type. This study demonstrates the tremendous potential of using MMLA with multiple biosensors to understand learning engagement in a multimodal online learning environment.
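To make the analysis step concrete, the following is a minimal sketch in R of how time-synchronized biosensor exports of this kind might be merged and summarized. It is not the authors' actual pipeline; the file names, column names, and window size are hypothetical assumptions, and only the general approach (aligning streams on a shared timestamp, then aggregating per time window) reflects the workflow described in the abstract.

```r
# Minimal sketch, not the authors' pipeline: merge time-synchronized biosensor
# exports on a shared timestamp and summarize engagement per time window.
# File names and column names below are hypothetical placeholders.

gsr  <- read.csv("gsr_eda_export.csv")            # e.g. Timestamp (ms), PeakCount
eye  <- read.csv("eye_tracking_export.csv")       # e.g. Timestamp (ms), FixationDuration
face <- read.csv("facial_expression_export.csv")  # e.g. Timestamp (ms), JoyEvidence

# Align the three streams on the shared timestamp column.
merged <- merge(merge(gsr, eye, by = "Timestamp"), face, by = "Timestamp")

# Aggregate hypothetical engagement indicators over 10-second windows.
merged$Window <- merged$Timestamp %/% 10000
aggregate(cbind(PeakCount, FixationDuration, JoyEvidence) ~ Window,
          data = merged, FUN = mean)
```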