Abstract

<p>Facial expressions convey rich information about human emotions and play a central role in interpersonal communication. Facial expression classification is applied in fields such as remote learning, medical care, and smart traffic. However, because facial emotions are complex and diverse, existing facial expression recognition models achieve low recognition rates and struggle to extract the precise features associated with expression changes. To overcome this problem, we propose the Multi-feature Integrated Concurrent Neural Network (MICNN), which differs significantly from single-network architectures. It aggregates the prominent features of facial expressions by integrating three networks, a Sequential Convolutional Neural Network (SCNN), a Residual Dense Network (RDN), and an Attention Residual Learning Network (ARLN), to improve the accuracy of the facial emotion detection system. Additionally, Local Binary Pattern (LBP) and Principal Component Analysis (PCA) are applied to represent the facial features, and these are combined with the texture features extracted by the Gray-Level Co-occurrence Matrix (GLCM). Finally, the integrated features are fed into a softmax layer to classify the facial images. Experiments are carried out on benchmark datasets using k-fold cross-validation, and the results demonstrate the superiority of the proposed model.</p>
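To make the handcrafted half of the pipeline concrete, the sketch below shows one plausible way to fuse an LBP histogram with a GLCM texture statistic into a single feature vector, as the abstract describes. It is a minimal pure-NumPy illustration under assumed parameters (8-neighbour LBP, horizontal offset-1 GLCM, contrast as the texture statistic); the function names and settings are illustrative, not the paper's MICNN implementation, and the PCA and network stages are omitted.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Normalized histogram of basic 8-neighbour Local Binary Pattern codes."""
    padded = np.pad(img, 1, mode='edge')
    center = padded[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.int32)
    # 8 neighbour offsets, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy: padded.shape[0] - 1 + dy,
                       1 + dx: padded.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

def glcm_contrast(img, levels=8):
    """Contrast statistic from a horizontal offset-1 co-occurrence matrix."""
    # Quantize gray levels so the co-occurrence matrix stays small
    q = (img.astype(float) / max(int(img.max()), 1) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    idx = np.arange(levels)
    return float((((idx[:, None] - idx[None, :]) ** 2) * glcm).sum())

def fused_features(img):
    """Concatenate the LBP histogram with the GLCM texture value."""
    return np.concatenate([lbp_histogram(img), [glcm_contrast(img)]])

# Toy grayscale "face" patch standing in for a real image
img = np.random.default_rng(0).integers(0, 256, (48, 48)).astype(np.uint8)
feat = fused_features(img)
```

In a full system the fused vector would be reduced with PCA and concatenated with the learned SCNN/RDN/ARLN features before the softmax classifier.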

