Abstract

Background: Facial expression recognition remains a challenging field, as evidenced by the shortcomings of current state-of-the-art classification techniques. Despite reporting high accuracy, these methods perform poorly when deployed in real-life settings. This poor performance arises because the training sets used are typically small, simple, and collected in controlled laboratory environments.

Methods: This paper explores newer datasets consisting of images captured under challenging conditions with many variations. Using such datasets improves classification accuracy because it exposes the model to a wider variety of samples. In addition, we used new performance metrics that reflect these challenging classification conditions. We reviewed the current best techniques for expression recognition and laid out a method for designing an improved deep neural network using AffectNet, a newer and more challenging dataset. The implementation follows an iterative process that trains a convolutional neural network on challenging datasets, evaluates the results, and improves the model by tuning its parameters. The models are also evaluated with new metrics such as cross-dataset accuracy and mean accuracy drop.

Results: We found that the best performing model was the Visual Geometry Group 16-layer (VGG16) model, with a training accuracy of 81.05%, an improvement of 9.05% over AlexNet, the next best model trained on the same dataset, and a testing accuracy of 70.69%, compared to 64% for AlexNet. The proposed model configuration was also assessed on cross-dataset accuracy, scoring 42.02% and outperforming Inception V3, the next best model on that metric, which scored 28.96%.

Conclusions: The research achieved improved expression classification accuracy through a better, more challenging dataset. In addition, the new metrics give a clearer picture of the model's robustness.
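To make the described approach concrete, the sketch below shows how a pre-trained VGG16 could be adapted for expression classification and how the two robustness metrics mentioned above might be computed. This is a minimal illustration only: the framework (PyTorch/torchvision), the eight-class AffectNet label set, and the exact formulas for cross-dataset accuracy and mean accuracy drop are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch, not the authors' code. Assumes PyTorch/torchvision,
# eight AffectNet expression classes, and plausible interpretations of the
# cross-dataset accuracy and mean accuracy drop metrics.
import torch
import torch.nn as nn
from torchvision import models

NUM_EXPRESSIONS = 8  # assumed: AffectNet's eight expression categories


def build_vgg16_classifier(num_classes: int = NUM_EXPRESSIONS) -> nn.Module:
    """Load an ImageNet-pre-trained VGG16 and replace its final layer."""
    model = models.vgg16(weights="IMAGENET1K_V1")
    model.classifier[6] = nn.Linear(4096, num_classes)
    return model


@torch.no_grad()
def accuracy(model: nn.Module, loader, device: str = "cpu") -> float:
    """Top-1 accuracy over a DataLoader yielding (image, label) batches."""
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / max(total, 1)


def cross_dataset_accuracy(model: nn.Module, other_loaders, device: str = "cpu") -> float:
    """Mean accuracy on datasets the model was *not* trained on (assumed definition)."""
    scores = [accuracy(model, dl, device) for dl in other_loaders]
    return sum(scores) / len(scores)


def mean_accuracy_drop(model: nn.Module, in_dataset_acc: float, other_loaders,
                       device: str = "cpu") -> float:
    """Average drop from in-dataset accuracy when evaluated on unseen datasets (assumed definition)."""
    drops = [in_dataset_acc - accuracy(model, dl, device) for dl in other_loaders]
    return sum(drops) / len(drops)
```

In the iterative process described above, a model like this would be trained on AffectNet, scored with these metrics against held-out and external datasets, and then adjusted (e.g. learning rate, augmentation, or layer freezing) before the next iteration.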
