Abstract
Facial emotion recognition extracts human emotions from images and videos. It therefore requires an algorithm that can understand and model the relationships between faces and facial expressions and recognize human emotions. Recently, deep learning models have been utilized to improve the performance of facial emotion recognition. However, these models suffer from overfitting and perform poorly on images with poor visibility and noise. Therefore, in this paper, an efficient deep learning-based facial emotion recognition model is proposed. Initially, contrast-limited adaptive histogram equalization (CLAHE) is applied to improve the visibility of the input images. Thereafter, a modified joint trilateral filter is applied to the enhanced images to remove the impact of impulsive noise. Finally, an efficient deep convolutional neural network is designed, and the Adam optimizer is utilized to optimize its cost function. Experiments are conducted using a benchmark dataset and competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
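The sketch below illustrates the general shape of the pipeline described in the abstract (CLAHE enhancement, noise suppression, then a CNN trained with Adam). It is a minimal illustration, not the authors' implementation: OpenCV's built-in CLAHE stands in for the enhancement step, a standard bilateral filter is used as a placeholder for the paper's modified joint trilateral filter, and the network layout, input size (48x48 grayscale), and seven-class output are assumptions chosen for clarity.

```python
# Illustrative sketch of the preprocessing + classification pipeline.
# Assumptions (not from the paper): cv2.bilateralFilter approximates the
# modified joint trilateral filter, and the CNN below is a generic layout,
# not the authors' architecture.
import cv2
import numpy as np
import tensorflow as tf

def preprocess(gray_face: np.ndarray) -> np.ndarray:
    """Enhance contrast with CLAHE, then suppress noise with an edge-preserving filter."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray_face)
    # Placeholder for the paper's modified joint trilateral filter.
    denoised = cv2.bilateralFilter(enhanced, d=5, sigmaColor=50, sigmaSpace=50)
    return denoised.astype("float32") / 255.0

# Assumed 48x48 grayscale faces and 7 emotion classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),  # helps mitigate the overfitting issue noted above
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),  # Adam optimizes the cost function
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```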
Highlights
Extracting visual information from images is one of the fundamental skills of human intelligence [1]
Facial emotion recognition represents the content of an input image in the form of human emotions by using various machine and deep learning models [6]
A novel deep convolutional neural network model is proposed to recognize the human emotions from facial images
Summary
Extracting visual information from images is one of the fundamental skills of human intelligence [1]. Facial emotion recognition represents the content of an input image in the form of human emotions by using various machine and deep learning models [6]. It initially extracts the face information and thereafter provides a descriptive emotion [7]. Lakshmi et al. (2021) [21] implemented a modified histogram of oriented gradients (HOG) and local binary pattern (LBP), i.e., HOGLBP, to extract the features. These methods achieve good performance but suffer from the overfitting issue. Facial expressions have also been learned through a deep sparse autoencoder network, which overcomes issues such as gradient diffusion and local extrema during model training. Tan et al. [28] recognized facial expressions using EEG and a multimodal emotion recognition method. The existing models perform poorly for images which have poor visibility and noise.
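For context on the HOG + LBP features cited above, the following is a hedged sketch of how the two standard descriptors can be computed and concatenated for a grayscale face image. It uses scikit-image's off-the-shelf `hog` and `local_binary_pattern`; the specific "HOGLBP" variant of Lakshmi et al. [21] is not reproduced, and the parameter values are illustrative assumptions.

```python
# Sketch: concatenating standard HOG and LBP descriptors for one face image.
# This is an illustration, not the HOGLBP method of [21].
import numpy as np
from skimage.feature import hog, local_binary_pattern

def hog_lbp_features(gray_face: np.ndarray) -> np.ndarray:
    """Return a concatenated HOG + LBP-histogram descriptor for a grayscale face."""
    hog_vec = hog(
        gray_face,
        orientations=9,
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        feature_vector=True,
    )
    # Uniform LBP with 8 neighbors at radius 1 yields codes in [0, 9].
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    return np.concatenate([hog_vec, lbp_hist])
```

Such handcrafted descriptors are typically fed to a classical classifier, which is one reason the summary notes that these methods can overfit and struggle on low-visibility, noisy images compared with the proposed deep model.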