Abstract

Facial expression recognition is a rapidly expanding field with the potential to transform a wide range of applications, including lie detection, social robotics, and driver fatigue detection. Traditional machine learning methods, however, have struggled with the task because of limitations such as manual feature selection and limited representational capacity, and they require large amounts of annotated data that are time-consuming and expensive to obtain. To overcome these difficulties, this paper proposes a novel method that builds a recognition model by combining a multi-layer perceptron (MLP) with ResNet. The hybrid model outperforms conventional CNN models, achieving an accuracy of 85.71% on the FER_2013 dataset. Transfer learning is also applied to raise the model's accuracy and guard against over-fitting. The model is trained and tested on FER_2013, and the experimental results show that it recognizes facial expressions while mitigating the over-fitting problem typically associated with deep learning. In future work, the study will incorporate a self-attention mechanism to further improve the model's performance, and the team also plans to apply the model to color images to strengthen its generalization ability.
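The abstract does not give implementation details, so the following PyTorch sketch only illustrates one plausible reading of the ResNet-plus-MLP design with transfer learning: an ImageNet-pretrained ResNet backbone whose classifier is replaced by an MLP head and fine-tuned on FER_2013. The ResNet depth, MLP dimensions, dropout rate, and preprocessing are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch (assumptions): ResNet-18 backbone pretrained on ImageNet,
# final layer replaced by an MLP head, fine-tuned for the 7 FER_2013 classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # FER_2013 has seven expression categories


class ResNetMLP(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        # Transfer learning: start from ImageNet-pretrained weights.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()  # keep only the convolutional features
        self.backbone = backbone
        # MLP classification head (hidden size chosen for illustration).
        self.mlp = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.ReLU(),
            nn.Dropout(0.5),  # dropout helps limit over-fitting
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.backbone(x))


if __name__ == "__main__":
    model = ResNetMLP()
    # FER_2013 images are 48x48 grayscale; here they are assumed to be
    # resized and replicated to 3 channels to match the pretrained backbone.
    dummy = torch.randn(4, 3, 224, 224)
    print(model(dummy).shape)  # torch.Size([4, 7])
```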
