Abstract

Facial expression recognition (FER) is the process of identifying a person's emotional state from their facial expressions. It is a rapidly growing area of research with numerous applications, including psychology, marketing, and human-computer interaction. This paper presents a novel approach to FER employing deep learning techniques, specifically leveraging transfer learning. To address the challenge of accurately recognizing seven emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) from facial expressions, a pre-trained MobileNetV2 architecture has been fine-tuned to improve accuracy. MobileNetV2 is a lightweight convolutional neural network architecture designed for efficient on-device inference. To evaluate the performance of the proposed FER model, the widely used FER-2013 benchmark dataset, along with random images and video clips, has been employed. Experimental results demonstrate that the proposed FER model attained an accuracy exceeding 99% when tested on diverse sets of random images and video clips, and an accuracy of 61% when evaluated on the FER-2013 dataset. Overall, this approach represents a significant advancement in real-time facial expression recognition using the MobileNetV2 architecture with the FER-2013 dataset, and may improve the quality of human-computer interactions.
