Abstract

Learning involves a rich interplay of cognitive, social and emotional states. Recognizing and understanding these states in the context of learning is therefore key to designing informed interventions and addressing the needs of individual students in order to provide personalized education. In this paper, we explore the automatic detection of learners’ nonverbal behaviors, namely hand-over-face gestures, head and eye movements and emotions conveyed through facial expressions, during learning. The proposed computer vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over time in a 40-minute classroom session involving reading and problem-solving exercises. The exercises in the session are divided into three categories: an easy, a medium and a difficult topic within the context of undergraduate computer science. We found a significant increase in head and eye movements both as time progresses and as the difficulty level increases. We also found a considerable occurrence of hand-over-face gestures (on average 21.35%) during the 40-minute session, a behavior that remains unexplored in the education domain. We propose a novel deep learning approach for automatic detection of hand-over-face gestures in images, achieving a classification accuracy of 86.87%. Hand-over-face gestures increase prominently as the difficulty level of the given exercise increases, and they occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading exercises (easy 16.20%, medium 20.06%, difficult 20.18%).
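
The abstract does not spell out the detection pipeline, so the snippet below is only an illustrative sketch under assumed choices: an ImageNet-pretrained ResNet-18 from torchvision (not necessarily the architecture used in the paper) with a two-class head, classifying individual webcam frames as hand-over-face vs. no hand-over-face.

    # Illustrative sketch only: the paper's architecture is not given in this
    # abstract; we assume an ImageNet-pretrained ResNet-18 with a 2-class head.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_hof_classifier() -> nn.Module:
        """Binary classifier: class 0 = no hand-over-face, class 1 = hand-over-face (HoF)."""
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # replace the 1000-way ImageNet head
        return backbone

    model = build_hof_classifier().eval()
    with torch.no_grad():
        frame = torch.rand(1, 3, 224, 224)  # stand-in for one preprocessed 224x224 webcam frame
        p_hof = torch.softmax(model(frame), dim=1)[0, 1].item()
    print(f"P(hand-over-face) = {p_hof:.2f}")

In practice such a classifier would be fine-tuned on labelled frames from the recorded sessions and run on the webcam stream frame by frame; the reported 86.87% accuracy refers to the authors' own model, not to this sketch.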

Highlights

  • The aim is to split the subset into student-wise training and testing images so that the model can be tested on unseen students, reflecting real-world application (see the sketch after this list)

  • We have explored a variety of nonverbal behaviors (emotions, head movements, head pose, eye gaze and hand-over-face (HoF) gestures) that can be integrated with modern learning technologies to recognize students’ affective states in real time
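
The student-wise split mentioned above can be realized with a grouped split, in which all images of a given student fall entirely into either the training or the test set. The sketch below uses scikit-learn's GroupShuffleSplit; the student IDs and file names are made-up placeholders, not the paper's actual data.

    # Minimal sketch of a student-wise (subject-independent) split.
    # Student IDs and file names below are hypothetical placeholders.
    from sklearn.model_selection import GroupShuffleSplit

    image_paths = ["s01/f001.png", "s01/f002.png", "s02/f001.png", "s03/f001.png"]
    labels      = [1, 0, 1, 0]                    # 1 = hand-over-face present in the frame
    student_ids = ["s01", "s01", "s02", "s03"]    # group key: which student each image comes from

    # All images of any given student land on exactly one side of the split,
    # so the classifier is always evaluated on students it has never seen.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train_idx, test_idx = next(splitter.split(image_paths, labels, groups=student_ids))
    print("train students:", sorted({student_ids[i] for i in train_idx}))
    print("test students: ", sorted({student_ids[i] for i in test_idx}))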

Introduction

Sir Richard Livingstone once said: “The test of successful education is not the amount of knowledge that pupils take away from school, but their appetite to know and their capacity to learn” (Livingstone 1941, p. 28). Understanding learners’ capacity to guide their learning in school and beyond has been a key topic of discussion among educational researchers, policy-makers and practicing educators alike. There has been increasing interest in developing robotic tutors (Gordon et al. 2016; Benitti 2012; Jones et al. 2015), e-learning and Intelligent Tutoring Systems (ITSs) (Andallaza et al. 2012; Woolf 2009; Woolf et al. 2009; D’Mello et al. 2005; Graesser et al. 2007; Litman and Forbes-Riley 2004) that would provide individualized teaching in multiple domains. Such systems often infer affective states based solely on facial expressions and are capable of personalization to some extent, but they lack the required empathic capabilities, i.e. the ability to fully interpret the emotions, moods and temperaments of learners.
