Abstract

Emotion recognition is a dynamic process that focuses on a person's emotional state, which implies that the emotions associated with each individual's activities are unique. Human emotion analysis and recognition have been popular study areas among computer vision researchers. High dimensionality, execution time, and cost are the main difficulties in human emotion detection. To address these issues, the proposed model aims to design a human emotion recognition model using Residual Networks-101 (ResNet-101). ResNet-101 is a Convolutional Neural Network (CNN) architecture that mitigates the vanishing gradient problem and makes it possible to build networks with thousands of convolutional layers that outperform networks with fewer layers. An image dataset was used for this emotion recognition task and was first subjected to preprocessing to resize the images and remove the noise present in them. After preprocessing, the images were passed to the classifier to recognize the emotions effectively; here, ResNet-101 was used for the classification of six emotion classes. The experimental results demonstrate that the ResNet-101 model outperforms the most recent techniques for emotion recognition. The proposed model was implemented in MATLAB and evaluated on several performance metrics. The proposed architecture attained 92% accuracy with an error of 0.08, along with 92% precision, 85% specificity, and 98% sensitivity, which shows the effectiveness of the proposed model relative to existing approaches such as LeNet, AlexNet, and VGG. Compared with current techniques, the proposed model also provides improved recognition accuracy for low-intensity or mild emotional expressions.
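
A minimal sketch of the pipeline described above, assuming MATLAB's Deep Learning Toolbox with the ResNet-101 support package; the dataset folder name, split ratio, and training options are illustrative placeholders rather than the paper's exact settings:

% Minimal transfer-learning sketch (hypothetical paths and settings).
imds = imageDatastore('emotions', ...          % one subfolder per emotion class
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainImds, testImds] = splitEachLabel(imds, 0.8, 'randomized');

net    = resnet101;                            % pretrained ResNet-101
lgraph = layerGraph(net);
inSize = net.Layers(1).InputSize;              % network input size, e.g. [224 224 3]

% Swap the final layers so the network predicts 6 emotion classes.
% (Layer names assume the stock resnet101 model; verify with analyzeNetwork.)
numClasses = 6;
lgraph = replaceLayer(lgraph, 'fc1000', ...
    fullyConnectedLayer(numClasses, 'Name', 'fc_emotions'));
lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
    classificationLayer('Name', 'emotion_output'));

% Preprocessing: resize every image to the network input size
% (denoising, as described in the abstract, would be applied beforehand).
augTrain = augmentedImageDatastore(inSize(1:2), trainImds);
augTest  = augmentedImageDatastore(inSize(1:2), testImds);

opts = trainingOptions('sgdm', 'MaxEpochs', 10, 'MiniBatchSize', 32, ...
    'InitialLearnRate', 1e-4, 'Verbose', false);

trainedNet = trainNetwork(augTrain, lgraph, opts);

% Evaluate accuracy on the held-out test set.
pred = classify(trainedNet, augTest);
acc  = mean(pred == testImds.Labels);
fprintf('Test accuracy: %.2f%%\n', 100 * acc);

Replacing only the final fully connected and classification layers is the standard transfer-learning route: the earlier residual blocks keep their pretrained weights, and only the new head is learned from scratch on the emotion dataset.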
