Abstract

Facial expression recognition refers to the automatic identification of a subject's affective state by computational means. It is used in many applications, such as security, human-computer interaction, driver safety, and health care. Although many works tackle facial expression recognition, and their discriminative power may be acceptable, current solutions offer limited explicative power, which is insufficient for certain applications, such as facial rehabilitation. Our aim is to alleviate this limitation by exploiting explainable fuzzy models over sequences of frontal face images. The proposed model uses appearance features to describe facial expressions in terms of facial movements, giving a detailed explanation of which movements are present in the face and why the model makes a given decision. The model architecture was selected to preserve the semantic meaning of the detected facial movements. The proposed model can discriminate between the seven basic facial expressions, obtaining an average accuracy of 90.8±14%, with a maximum value of 92.9±28%.
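
The rule-based reasoning behind an explainable fuzzy classifier can be illustrated with a small sketch. This is not the authors' implementation: the membership functions, the movement features (e.g., mouth_corners_raised), and the rules are illustrative assumptions, shown only to make the idea of a decision that can be explained in terms of facial movements concrete.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms over movement intensities normalized to [0, 1] (assumed scale).
def low(x):
    return tri(x, -0.01, 0.0, 0.5)

def high(x):
    return tri(x, 0.5, 1.0, 1.01)

# Illustrative rules: each antecedent is a set of fuzzy terms on movement
# features; the consequent is an expression label. AND is taken as min.
RULES = [
    ({"mouth_corners_raised": high, "eyes_narrowed": low}, "happiness"),
    ({"brows_lowered": high, "lips_pressed": high}, "anger"),
    ({"brows_raised": high, "mouth_open": high}, "surprise"),
]

def classify(movements):
    """Return (label, firing strength) of the most strongly fired rule."""
    best_label, best_strength = "neutral", 0.0
    for antecedent, label in RULES:
        strength = min(term(movements.get(feature, 0.0))
                       for feature, term in antecedent.items())
        if strength > best_strength:
            best_label, best_strength = label, strength
    return best_label, best_strength

print(classify({"mouth_corners_raised": 0.75, "eyes_narrowed": 0.25}))
# -> ('happiness', 0.5)
```

Because every decision is traced back to named facial movements and the rules they fire, this style of model can report both the predicted expression and the movements that support it.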

Highlights

  • In Computer Science, Facial Expression Recognition (FER) refers to the identification of emotions in images or video sequences of human faces by computational algorithms

  • Facial expressions simplify the communication of emotions [1]

  • We presented the design, implementation, and experiments carried out with a simple model for the recognition of facial expressions based on the facial movements of distinctive areas


Summary

Introduction

In Computer Science, Facial Expression Recognition (FER) refers to the identification of emotions in images or video sequences of human faces by computational algorithms. FER is important because of its applications in different domains, such as security, affective computing, sociology [2], and facial rehabilitation [3,4,5]. Many works have been reported to tackle FER. These can be split into static and dynamic approaches. Dynamic approaches estimate differences between the face in a neutral state and the facial changes across a sequence of frontal images.
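
As a rough illustration of the dynamic idea, the sketch below takes the first frame of an aligned frontal-face sequence as the neutral reference and tracks the mean appearance change in a few hypothetical facial regions. The region names, pixel coordinates, and the feature used (mean absolute pixel difference) are assumptions for illustration, not the paper's actual appearance features.

```python
import numpy as np

# Hypothetical regions of interest given as (row_slice, col_slice) in pixel coordinates.
REGIONS = {
    "brows": (slice(20, 60), slice(30, 130)),
    "eyes":  (slice(60, 100), slice(30, 130)),
    "mouth": (slice(140, 200), slice(50, 110)),
}

def movement_signals(frames):
    """frames: iterable of aligned grayscale face images (H x W arrays).
    Returns a dict mapping each region to its mean absolute difference
    from the first (neutral) frame, one value per subsequent frame."""
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    neutral = frames[0]
    signals = {name: [] for name in REGIONS}
    for frame in frames[1:]:
        diff = np.abs(frame - neutral)
        for name, (rows, cols) in REGIONS.items():
            signals[name].append(float(diff[rows, cols].mean()))
    return signals

# Example with synthetic 240 x 160 frames, purely for illustration.
seq = [np.zeros((240, 160)) for _ in range(3)]
seq[2][140:200, 50:110] = 1.0   # simulate a mouth-area change in the last frame
print(movement_signals(seq)["mouth"])  # -> [0.0, 1.0]
```

Signals of this kind, describing how much each facial area moves away from the neutral state over the sequence, are the sort of input a rule-based, explainable classifier can reason over.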

