Abstract

Large-scale facial expression datasets are composed primarily of real-world facial images, in which expression occlusion and large-pose faces are two major obstacles to accurate expression recognition. Moreover, because facial expression data collected in natural scenes typically follow a long-tailed distribution, trained models recognize the majority classes well but the minority classes with low accuracy. To improve the robustness and accuracy of expression recognition networks in uncontrolled environments, this paper proposes an efficient network structure based on an attention mechanism that fuses global and local features (AM-FGL). We use a channel-spatial attention module and local-feature convolutional neural networks to perceive the global and local features of the face, respectively. Because real-world expression datasets commonly follow a long-tailed distribution in which neutral and happy expressions make up the majority (head) classes, a trained model exhibits low recognition accuracy for tail expressions such as fear and disgust. CutMix is a data augmentation method originally proposed in other fields; building on the CutMix idea, we propose a simple and effective data-balancing method (BC-EDB). Its key idea is to paste key pixel regions (around the eyes, mouth, and nose), which reduces overfitting. Our method focuses on the recognition of tail expressions, occluded expressions, and large-pose faces, and it achieves state-of-the-art results on Occlusion-RAF-DB, 30° Pose-RAF-DB, and 45° Pose-RAF-DB, with accuracies of 86.96%, 89.74%, and 88.53%, respectively.
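
To make the global-local fusion idea concrete, the sketch below shows one plausible reading of such an architecture. The abstract does not specify the AM-FGL design, so the CBAM-style channel-plus-spatial attention, the shared backbone, the landmark-crop local branches, and all module sizes here are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a global-local fusion network with channel-spatial
# attention. All design details are assumed; only the overall idea
# (attended global features fused with local face-region features) follows
# the abstract.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: squeeze spatial dims with avg- and max-pooling.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool across channels, then a 7x7 convolution.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class GlobalLocalFusion(nn.Module):
    """Fuses an attended global feature with features from local face crops."""
    def __init__(self, backbone, feat_dim=512, num_classes=7):
        super().__init__()
        self.backbone = backbone  # shared CNN returning (b, feat_dim, h, w)
        self.attn = ChannelSpatialAttention(feat_dim)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, face, local_crops):
        # Global branch: whole face through backbone + channel-spatial attention.
        g = self.pool(self.attn(self.backbone(face))).flatten(1)
        # Local branch: average backbone features of landmark-centred crops
        # (e.g. eyes, nose, mouth), so an occluded region contributes less.
        l = torch.stack([self.pool(self.backbone(c)).flatten(1)
                         for c in local_crops], dim=0).mean(0)
        return self.classifier(torch.cat([g, l], dim=1))
```

In this reading, the attention module lets the global branch down-weight occluded or uninformative areas, while the local branch preserves discriminative detail from key facial regions; both properties match the paper's stated goals for occluded and large-pose faces.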
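The BC-EDB balancing step can likewise be illustrated with a short sketch. The abstract only says that key facial regions (eyes, nose, mouth) are pasted in the style of CutMix to counter the long tail; the landmark boxes, the label-mixing rule, and the choice of which image hosts the patch below are all assumptions.

```python
# Illustrative CutMix-style paste of a key facial region. Unlike vanilla
# CutMix, which samples the box location uniformly at random, the box here is
# assumed to come from facial landmarks (e.g. a mouth region), following the
# abstract's "key pixels" idea. Labels are assumed to be one-hot NumPy arrays.
import numpy as np

def paste_key_region(host_img, tail_img, box, host_label, tail_label):
    """Paste one landmark box from a tail-class face onto a host face and mix
    the labels by pasted-area ratio, as in CutMix."""
    x1, y1, x2, y2 = box                      # landmark-derived region box
    mixed = host_img.copy()
    mixed[y1:y2, x1:x2] = tail_img[y1:y2, x1:x2]
    # lam = fraction of the image still belonging to the host face.
    lam = 1.0 - (x2 - x1) * (y2 - y1) / (host_img.shape[0] * host_img.shape[1])
    label = lam * host_label + (1.0 - lam) * tail_label
    return mixed, label
```

Pasting discriminative regions rather than random boxes is the plausible distinction from vanilla CutMix here: the synthesized samples for tail classes such as fear and disgust keep the expression-bearing pixels intact, which is consistent with the abstract's claim that the method reduces overfitting on minority classes.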
