Abstract
Facial expression recognition (FER) has been widely researched in recent years, with successful applications in domains such as driver monitoring and safety warning, surveillance, and measuring customer satisfaction. However, FER remains challenging because the same facial expression varies widely across individuals. Researchers currently approach the problem mainly with convolutional neural networks (CNNs) built on architectures such as AlexNet, VGGNet, GoogLeNet, ResNet, and SENet. Although the FER results of these models have steadily improved as these architectures evolve, there is still room for improvement, especially in practical applications. In this study, we propose a CNN-based model using a residual network architecture for the FER problem. We also augment images to diversify the training data, which improves the model's recognition results and helps avoid overfitting. Building on this model, we propose an integrated system for learning management systems that identifies students and evaluates their online learning. We run experiments on published research datasets, CK+, Oulu-CASIA, and JAFFE, as well as on images collected from our students (FERS21). Our experimental results indicate that the proposed model performs FER with significantly higher accuracy than other existing methods.
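To make the two core ingredients of the abstract concrete, the sketch below shows a basic residual block with a skip connection and an image-augmentation pipeline of the kind described, written in PyTorch. This is a minimal illustration under stated assumptions, not the paper's exact architecture: all layer widths, strides, and augmentation parameters here are assumed for demonstration.

```python
# Minimal sketch (assumed, not the paper's exact model): a residual block
# plus an augmentation pipeline that diversifies FER training images.
import torch
import torch.nn as nn
from torchvision import transforms

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus a skip connection."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection so the identity path matches the main path's shape
        # when the channel count or spatial resolution changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Adding the (projected) input lets gradients flow around the block.
        return self.relu(out + self.shortcut(x))

# Augmentation of the kind the abstract mentions: random geometric and
# photometric perturbations that diversify the training set (parameters assumed).
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```

A full model would stack several such blocks (deeper ones downsampling via stride 2) and end with global average pooling and a linear classifier over the expression classes.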