Abstract

Facial expressions (FEs) are among the most prominent means of conveying one's emotions and are pivotal to nonverbal communication. Potential applications across a wide range of areas in computer vision have lent a strong impetus to research on automatic facial expression recognition. This work discusses the effectiveness of two optical flow-based features for modeling the FEs associated with prototypic emotions, based on the pattern of nonrigid, deformable motion of facial components occurring during their portrayal. The discernible motion patterns are categorized into distinct discrete classes, with the descriptive features indicating the global spatial distribution of deformation derived from the dense optical flow field computed between emotional and neutral face images. Results obtained from evaluation on images and video clips taken from the Extended Cohn-Kanade, Japanese Female Facial Expressions, and Dynamic Karolinska Directed Emotional Faces datasets with multi-class support vector machine and k-nearest neighbor classifiers are competitive with state-of-the-art techniques and concordant with empirical psychological studies in emotion science.
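The pipeline the abstract describes — summarizing a dense optical flow field into a global deformation descriptor and classifying it with a nearest-neighbor rule — can be sketched as follows. This is a minimal illustration, not the paper's exact descriptor: the histogram binning, the feature names, and the toy k-NN voting scheme are all assumptions made for the example.

```python
import math

def flow_orientation_histogram(flow, bins=8):
    """Summarize a dense optical-flow field as an orientation histogram.

    `flow` is a list of per-pixel (dx, dy) motion vectors; each vector's
    angle selects a bin and its magnitude contributes the weight, so the
    normalized histogram reflects the global spatial distribution of
    deformation. (Illustrative descriptor, not the paper's exact feature.)
    """
    hist = [0.0] * bins
    for dx, dy in flow:
        mag = math.hypot(dx, dy)
        if mag == 0.0:
            continue  # static pixels carry no deformation information
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def knn_classify(query, examples, k=3):
    """Majority vote among the k training features closest in Euclidean
    distance; `examples` is a list of (feature_vector, label) pairs."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(query, feat)), label)
        for feat, label in examples
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy usage: upward flow (e.g. raised brows) vs. downward flow.
up = flow_orientation_histogram([(0.0, 1.0)] * 16)
down = flow_orientation_histogram([(0.0, -1.0)] * 16)
print(knn_classify(up, [(up, "surprise"), (down, "sadness")], k=1))
```

In practice the dense flow field would come from an optical-flow estimator applied to a neutral/emotional image pair, and the paper uses a multi-class SVM alongside k-NN; the k-NN step above stands in for either classifier.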
