Facial expressions are critical indicators of human emotions, and their recognition has attracted considerable research attention; however, recognizing expressions in natural conditions remains a challenge due to variations in head pose, occlusion, and illumination. Many studies have focused on recognizing emotions from frontal images only, whereas this paper uses in-the-wild images from the FER2013 dataset to build a more generalizable model despite these challenges. FER2013 is among the most difficult datasets, with a reported human-level accuracy of only 65.5%. This paper proposes a model for recognizing facial expressions using pre-trained deep convolutional neural networks and transfer learning. The hybrid model combines two pre-trained deep convolutional neural networks and is trained under multiple configurations for greater efficiency, categorizing facial expressions into seven classes. The results show that the best accuracy of the proposed models is 74.39% for the hybrid model and 73.33% for the fine-tuned single EfficientNetB0 model, while the highest accuracy among previous methods was 73.28%. Thus, the hybrid and single models outperform other state-of-the-art classification methods without using any additional data, ranking first and second among these methods. The hybrid model also outperforms the second-highest-accuracy method, which relied on extra data. Incorrectly labeled images in the dataset unfairly reduce the measured accuracy, yet our best model recognized their actual classes correctly.
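The following is a minimal sketch of the hybrid transfer-learning idea described above, written in TensorFlow/Keras. The abstract names EfficientNetB0 as the single model; the second backbone (ResNet50 here), the input size, and all hyperparameters are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0, ResNet50

NUM_CLASSES = 7                # the seven FER2013 expression classes
INPUT_SHAPE = (224, 224, 3)    # assumed; FER2013 images are 48x48 grayscale and
                               # would need resizing and channel replication

# Two ImageNet-pretrained backbones used as feature extractors (transfer learning).
# Per-backbone input preprocessing (e.g., preprocess_input) is omitted for brevity.
effnet = EfficientNetB0(include_top=False, weights="imagenet",
                        input_shape=INPUT_SHAPE, pooling="avg")
resnet = ResNet50(include_top=False, weights="imagenet",
                  input_shape=INPUT_SHAPE, pooling="avg")

# Freeze backbone weights first; a later fine-tuning stage can unfreeze them.
effnet.trainable = False
resnet.trainable = False

inputs = layers.Input(shape=INPUT_SHAPE)
f1 = effnet(inputs)            # 1280-dim pooled features
f2 = resnet(inputs)            # 2048-dim pooled features

# "Hybrid" combination assumed here: concatenate the two feature vectors,
# then classify into the seven expression categories.
features = layers.Concatenate()([f1, f2])
x = layers.Dropout(0.3)(features)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In practice the fine-tuned single-model variant would follow the same pattern with only the EfficientNetB0 branch, unfreezing some or all backbone layers at a lower learning rate.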