Abstract

With recent advances in MOOCs, current e-learning systems have the advantage of alleviating barriers caused by time differences and the geographic separation between teachers and students. However, they suffer from a 'lack of supervision' problem: an e-learner's learning unit state (LUS) cannot be supervised automatically. In this paper, we present a fusion framework that considers three channels of data: 1) videos/images from a camera, 2) eye movement information tracked by a low-cost eye tracker, and 3) mouse movement. Based on these data modalities, we propose a novel multi-channel data fusion approach to learning unit state recognition. We also propose a method for building the learning state recognition model that avoids manually labeling image data. The experiments were carried out on our online learning prototype system, and we chose CART, Random Forest, and GBDT regression models to predict the e-learner's learning state. The results show that the multi-channel fusion models achieve better recognition performance than the single-channel models, and that the best recognition performance is reached when image, eye movement, and mouse movement features are fused.
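The paper's implementation is not reproduced here; the following is a minimal sketch of the fusion-and-regression setup the abstract describes, assuming each channel has already been reduced to a fixed-length feature vector per learning unit. The array shapes and synthetic data are illustrative only, and scikit-learn's DecisionTreeRegressor, RandomForestRegressor, and GradientBoostingRegressor stand in for the CART, Random Forest, and GBDT models named above.

```python
# Sketch of multi-channel feature fusion for learning-state regression.
# Shapes, feature counts, and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor          # CART
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_units = 200                                  # hypothetical number of learning units
X_image = rng.normal(size=(n_units, 16))       # facial-expression features from video
X_eye = rng.normal(size=(n_units, 8))          # fixation/saccade features from the tracker
X_mouse = rng.normal(size=(n_units, 4))        # mouse-dynamics features
y = rng.uniform(size=n_units)                  # learning-unit state score (target)

channels = {
    "image": X_image,
    "eye": X_eye,
    "mouse": X_mouse,
    "image+eye+mouse": np.hstack([X_image, X_eye, X_mouse]),  # feature-level fusion
}
models = {
    "CART": DecisionTreeRegressor(max_depth=5),
    "RandomForest": RandomForestRegressor(n_estimators=100),
    "GBDT": GradientBoostingRegressor(n_estimators=100),
}

# Cross-validated R^2 for every channel/model pair makes the single-channel
# vs. fused comparison explicit.
for ch_name, X in channels.items():
    for m_name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{ch_name:16s} {m_name:12s} R^2 = {scores.mean():.3f}")
```

Feature-level concatenation is the simplest fusion strategy; whether the paper fuses at the feature or decision level is not stated in this summary, so the `np.hstack` step should be read as one plausible choice.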

Highlights

  • As compared to a traditional classroom, learners’ emotions and lack of concentration or motivation cannot be monitored dynamically or in real time in digital learning environments [1]

  • We propose an integrated computational framework to characterize and quantify multi-dimensional engagement in e-learning environments from three facets: affect, behavior, and cognition, which provides new insight into computer-based learning analysis

  • To select the most appropriate models and parameters for monitoring learners’ facial expressions, eye movement behaviors, and short video course performance, we carried out a series of comparative experiments


Summary

INTRODUCTION

As compared to a traditional classroom, learners’ emotions and lack of concentration or motivation cannot be monitored dynamically or in real time in digital learning environments [1]. Emotions and eye behaviors reflect the student’s state objectively and are easy to observe, while the cognitive state emphasizes the learner’s mental and psychological condition, such as understanding, self-regulation, or meta-cognition. To recognize these states, we employ multichannel sensory data: video streams captured by a camera, eye movement information captured by a low-cost eye tracker (the Tobii Eye Tracker 4C), and the mouse dynamics log from a standard mouse. We propose an integrated computational framework to characterize and quantify multi-dimensional engagement in an e-learning environment from three facets: affect, behavior, and cognition, which provides new insight into computer-based learning analysis. In this framework, the three channels of data (video, eye movement, and mouse dynamics) are captured through low-cost devices, without intrusive or wearable equipment.
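The summary does not detail the preprocessing pipeline, but the sketch below illustrates one plausible way to bring the three non-intrusive streams into a common form: bucketing hypothetical timestamped gaze and mouse samples into fixed time windows and computing simple per-window features. The sample fields (`t`, `x`, `y`), the 5-second window, and the two features (mean mouse speed, gaze dispersion) are assumptions for illustration, not the paper's method.

```python
# Sketch: aligning timestamped channels into fixed windows and computing
# simple per-window features. Field names and window length are assumed.
from dataclasses import dataclass
import math

@dataclass
class GazeSample:
    t: float      # seconds since session start
    x: float      # gaze position in normalized screen coordinates
    y: float

@dataclass
class MouseSample:
    t: float
    x: float      # pointer position in pixels
    y: float

WINDOW = 5.0  # hypothetical window length in seconds

def window_index(t: float) -> int:
    return int(t // WINDOW)

def mouse_speed_per_window(samples: list[MouseSample]) -> dict[int, float]:
    """Mean pointer speed (pixels/s) per window, a common mouse-dynamics feature."""
    dist: dict[int, float] = {}
    for prev, cur in zip(samples, samples[1:]):
        w = window_index(cur.t)
        dist[w] = dist.get(w, 0.0) + math.hypot(cur.x - prev.x, cur.y - prev.y)
    return {w: d / WINDOW for w, d in dist.items()}

def gaze_dispersion_per_window(samples: list[GazeSample]) -> dict[int, float]:
    """Mean distance of gaze points from their window centroid (attention spread)."""
    buckets: dict[int, list[GazeSample]] = {}
    for s in samples:
        buckets.setdefault(window_index(s.t), []).append(s)
    out: dict[int, float] = {}
    for w, pts in buckets.items():
        cx = sum(p.x for p in pts) / len(pts)
        cy = sum(p.y for p in pts) / len(pts)
        out[w] = sum(math.hypot(p.x - cx, p.y - cy) for p in pts) / len(pts)
    return out
```

Per-window features of this kind can then be concatenated with per-window image features to form the fused vectors consumed by the regression models above.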

RELATED WORKS
EXPERIMENTS
Findings
CONCLUSION AND DISCUSSION

