Abstract

Facial expressions (FEs) are among the most prominent means of conveying emotion and are pivotal to nonverbal communication. Potential applications across a wide range of areas in computer vision have lent strong impetus to research on automatic facial expression recognition. This work examines the effectiveness of two optical flow-based features for modeling the FEs associated with the prototypic emotions, based on the patterns of nonrigid, deformable motion of facial components that occur during their portrayal. The discernible motion patterns are categorized into distinct discrete classes, with the descriptive features capturing the global spatial distribution of deformation derived from the dense optical flow field between neutral and emotional face images. Results of evaluation on images and video clips from the Extended Cohn-Kanade, Japanese Female Facial Expressions, and Dynamic Karolinska Directed Emotional Faces datasets, using multi-class support vector machine and k-nearest neighbor classifiers, are competitive with state-of-the-art techniques and concordant with empirical psychological studies in emotion science.
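The abstract describes the pipeline only at a high level. As a hypothetical sketch of the general idea (not the paper's actual features or classifier settings), one can turn a dense optical flow field between a neutral and an emotional face image into a global deformation descriptor — here, a magnitude-weighted histogram of flow orientations, a common stand-in — and classify it with a plain k-nearest neighbor vote:

```python
import numpy as np

def flow_descriptor(flow, bins=8):
    """Magnitude-weighted histogram of flow orientations.

    `flow` is a dense (H, W, 2) field of per-pixel (dx, dy) displacements,
    as produced by any dense optical flow estimator. The normalized
    histogram summarizes the global spatial distribution of deformation.
    """
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx)  # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def knn_predict(train_X, train_y, x, k=1):
    """k-nearest-neighbor majority vote with Euclidean distance."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy example: upward vs. downward dominant motion stands in for two
# expression classes (real inputs would be flow computed between neutral
# and emotional face images from the datasets named in the abstract).
up = np.zeros((8, 8, 2)); up[..., 1] = 1.0      # uniform upward flow
down = np.zeros((8, 8, 2)); down[..., 1] = -1.0  # uniform downward flow
X = np.stack([flow_descriptor(up), flow_descriptor(down)])
y = np.array(["happy", "sad"])

query = flow_descriptor(0.5 * up)  # same direction, smaller magnitude
print(knn_predict(X, y, query))    # -> happy
```

Because the descriptor is normalized, it is invariant to the overall magnitude of motion, so the weaker query flow still lands on its training neighbor. The paper's multi-class SVM could be swapped in for `knn_predict` over the same descriptors.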
