Abstract

Online education has developed rapidly due to its irreplaceable convenience. Under the severe circumstances caused by COVID-19, many schools around the world have delayed reopening and adopted online education as one of their main teaching methods. However, the efficiency of online classes has long been questioned: compared with traditional face-to-face classes, online courses lack direct, timely, and effective communication and feedback between teachers and students. Previous studies have shown that there is generally a close and stable relationship between a person’s facial expressions and emotions. From the perspective of computer simulation, this work proposes a framework that combines a facial expression recognition (FER) algorithm with online course platforms. The cameras in students’ devices collect face images, and the FER algorithm analyzes the facial expressions and classifies them into 8 kinds of emotions. An online course with 27 students conducted on Tencent Meeting is used to test the proposed method, and the results show that the method performs robustly in different environments. The framework can also be applied to similar scenarios such as online meetings.
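As a concrete illustration of the pipeline described above, the following is a minimal sketch in Python, assuming a trained FER model (`fer_model`, a placeholder whose `predict` interface is an assumption, not from the paper) and using OpenCV’s bundled Haar cascade as a stand-in face detector; the emotion label ordering is likewise illustrative.

```python
# Sketch of the proposed pipeline: detect faces in a frame captured
# from an online-course session, then classify each face into one of
# the 8 emotion categories. `fer_model` is a placeholder for the
# trained CNN; only the OpenCV calls are real library APIs.
import cv2
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "contempt", "neutral"]

# OpenCV's bundled Haar cascade serves as a stand-in face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_frame(frame, fer_model, input_size=48):
    """Return (bounding box, emotion label) for every face in `frame`."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (input_size, input_size))
        face = face.astype("float32") / 255.0          # normalize to [0, 1]
        probs = fer_model.predict(face[None, :, :, None])[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results
```

Applied to a meeting screenshot, this returns one bounding box and emotion label per detected face, which is the per-student feedback the framework would surface to the teacher.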

Highlights

  • Facial expression is one of the most powerful, natural, and universal signals for human beings to convey their emotional states and intentions, regardless of national borders, race, and gender [1, 2], and there are numerous related applications such as health management [3], aided driving [4, 5], and others [6,7,8,9]

  • In order to test the performance of the proposed framework in practical applications, we captured an image that includes 27 people from an online meeting held on Tencent Meeting and input it into the convolutional neural network (CNN) model. This image was taken just before the end of the meeting, while the moderator was making a concluding speech in a pleasant atmosphere

  • Everyone had been told that the meeting was coming to an end. According to the experiment conducted by Tonguç and Ozkara [61], students’ happiness increases significantly within the few minutes before the end of a lecture, so under similar circumstances it can be inferred that the emotions presented by most of the faces in this image are happy or neutral


Summary

Introduction

Facial expression is one of the most powerful, natural, and universal signals for human beings to convey their emotional states and intentions, regardless of national borders, race, and gender [1, 2], and there are numerous related applications such as health management [3], aided driving [4, 5], and others [6,7,8,9]. With the development of processing capabilities and computer simulation, various machine learning algorithms, such as Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), and Bayesian classifiers, have been applied to FER, and high accuracy has been verified in controlled environments where faces can be detected effectively. However, these methods are weak in generalization ability, which is the key criterion for evaluating the practicality of a model [40].

By combining existing online education platforms with a facial expression recognition model based on a convolutional neural network architecture, this work proposes a framework that enables real-time monitoring of students’ emotions in online courses and ensures that the feedback expressed by facial expressions is provided to teachers in a timely manner, so that they can flexibly adjust their teaching programs and improve the quality and efficiency of online education.

In order to prevent overfitting, a Dropout layer is added after each of the 2 fully connected layers, which drops a portion of the neurons according to a preset drop probability; in this paper, both values are set to 0.5. The following output layer is composed of 8 units, and softmax [60] is adopted as the activation function to classify the expressions in terms of anger, disgust, fear, happiness, sadness, surprise, contempt, and neutral. A minimal sketch of this classifier head is given below.
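The sketch below expresses the classifier head just described in Keras: two fully connected layers, each followed by Dropout with drop probability 0.5, and an 8-unit softmax output layer. The convolutional backbone is omitted, and the flattened feature dimension and the layer widths (256 units) are assumptions, not values taken from the paper.

```python
# Classifier head from the text: FC -> Dropout(0.5) -> FC -> Dropout(0.5)
# -> 8-unit softmax. Backbone and layer widths are assumed for illustration.
from tensorflow.keras import layers, models

def build_head(backbone_output_dim=2304, fc_units=256, n_classes=8):
    model = models.Sequential([
        layers.Input(shape=(backbone_output_dim,)),    # flattened CNN features
        layers.Dense(fc_units, activation="relu"),
        layers.Dropout(0.5),                           # drop half the units
        layers.Dense(fc_units, activation="relu"),
        layers.Dropout(0.5),                           # second Dropout, p = 0.5
        layers.Dense(n_classes, activation="softmax")  # 8 emotion classes
    ])
    return model
```

At inference time Dropout is inactive, so the softmax output is a probability distribution over the 8 emotion categories named above.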

