Abstract

Facial expression recognition (FER) methods based on single-source facial data often suffer reduced accuracy or unpredictable behavior under facial occlusion or illumination changes. To address this, a new technique called Fusion-CNN is proposed. It improves accuracy by extracting hybrid geometric features from a β-skeleton undirected graph and a fitted ellipse, which are processed by a 1D-CNN; in parallel, a 2D-CNN is trained on the same face image. The feature outputs of these two subnetworks are fused by concatenation into a single feature vector, which is classified by a deep neural network. The proposed method is evaluated on four public face datasets: the extended Cohn-Kanade (CK+) dataset, the Japanese Female Facial Expression (JAFFE) dataset, Karolinska Directed Emotional Faces (KDEF), and Oulu-CASIA. The experimental results show that Fusion-CNN outperforms other algorithms, achieving recognition accuracies of 98.22%, 93.07%, 90.30%, and 90.13% on the CK+, JAFFE, KDEF, and Oulu-CASIA datasets, respectively.
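The two-branch fusion described above can be sketched as follows. This is a minimal illustrative sketch only, not the paper's implementation: the branch output sizes (64 and 128), input sizes, and the random linear projections standing in for the trained 1D-CNN and 2D-CNN are all assumptions introduced here.

```python
import numpy as np

def cnn_1d_features(geom_signal, out_dim=64):
    # Stand-in for the 1D-CNN branch that processes the geometric features
    # (beta-skeleton graph and ellipse parameters). A fixed random linear
    # projection plus ReLU replaces the trained network; dimensions are
    # illustrative assumptions.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((geom_signal.size, out_dim))
    return np.maximum(geom_signal.ravel() @ w, 0.0)

def cnn_2d_features(image, out_dim=128):
    # Stand-in for the 2D-CNN branch operating on the face image itself.
    rng = np.random.default_rng(1)
    w = rng.standard_normal((image.size, out_dim))
    return np.maximum(image.ravel() @ w, 0.0)

def fused_feature_vector(geom_signal, image):
    # Fusion step: concatenate the two branch outputs into one feature
    # vector, which a downstream deep classifier would consume.
    return np.concatenate([cnn_1d_features(geom_signal),
                           cnn_2d_features(image)])

geom = np.zeros(40)        # hypothetical vector of geometric measurements
face = np.zeros((48, 48))  # hypothetical 48x48 grayscale face crop
fv = fused_feature_vector(geom, face)
print(fv.shape)  # (192,) = 64 geometric + 128 image features
```

The key point the sketch captures is that fusion happens at the feature level: each branch reduces its modality to a fixed-length vector, and classification operates on the concatenation rather than on either source alone.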
