Abstract

Facial expression recognition (FER) methods based on single-source facial data often suffer from reduced accuracy or unpredictable behavior under facial occlusion or illumination changes. To address this, a new technique called Fusion-CNN is proposed. It improves accuracy by extracting hybrid features from a β-skeleton undirected graph and ellipse parameters, on which a 1D-CNN is trained. In parallel, a 2D-CNN is trained on the same facial image. The features produced by these two subnetworks are then fused by concatenation into a single feature vector, which is classified by a deep neural network. The proposed method is evaluated on four public face datasets: the extended Cohn-Kanade (CK+) dataset, the Japanese Female Facial Expression (JAFFE) dataset, the Karolinska Directed Emotional Faces (KDEF) dataset, and Oulu-CASIA. The experimental results show that Fusion-CNN outperforms competing algorithms, achieving recognition accuracies of 98.22%, 93.07%, 90.30%, and 90.13% on the CK+, JAFFE, KDEF, and Oulu-CASIA datasets, respectively.
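The two-branch architecture the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hybrid-feature vector length (128), the layer widths and kernel sizes, the 48x48 grayscale input, and the seven expression classes are all assumptions, and PyTorch is chosen only for concreteness.

# Minimal sketch of the two-branch fusion idea in PyTorch.
# ASSUMPTIONS (not specified in the abstract): hybrid feature length,
# layer widths/kernel sizes, 48x48 grayscale input, 7 expression classes.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, hybrid_len=128, num_classes=7):
        super().__init__()
        # 1D-CNN branch: hybrid features (beta-skeleton graph + ellipse parameters)
        self.branch1d = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (B, 64)
        )
        # 2D-CNN branch: the raw face image
        self.branch2d = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 64)
        )
        # Fusion: concatenate both feature vectors, then classify
        self.classifier = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, hybrid, image):
        f1 = self.branch1d(hybrid)   # hybrid: (B, 1, hybrid_len)
        f2 = self.branch2d(image)    # image:  (B, 1, H, W)
        return self.classifier(torch.cat([f1, f2], dim=1))

# Usage with dummy tensors
model = FusionCNN()
logits = model(torch.randn(4, 1, 128), torch.randn(4, 1, 48, 48))
print(logits.shape)  # torch.Size([4, 7])

The key design point is late fusion: each modality is encoded by its own subnetwork, and only the resulting feature vectors are concatenated before the classifier, so a degradation in one input (e.g. occlusion in the image) does not directly corrupt the other branch's features.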
