Abstract

Automatic facial expression recognition (FER) has been extensively studied owing to its wide range of applications, such as e-learning platforms that automatically collect student feedback on particular content and assistive tools that help children with autism better understand their environment. Owing to advances in machine learning and computing hardware, researchers are developing increasingly accurate and robust FER frameworks. In this paper, we propose a new framework for person-independent FER that combines textural and shape features extracted from 49 landmarks detected in an input facial image. The shape information is extracted by applying the histogram of oriented gradients (HOG) to a binary patch generated by interpolating the locations of the 49 detected landmarks. The textural information is computed from 49 sub-images, each centered on one landmark, using a new handcrafted descriptor, also proposed herein, referred to as Orthogonal and Parallel-based Directions Generic Quad Map Binary Patterns (OPD-GQMBP). OPD-GQMBP encodes the relevant information by exploiting the orthogonality and parallelism of local geometries to select the prominent pixels within an n×n neighborhood. Under the Leave-One-Subject-Out evaluation protocol, the proposed framework outperforms many previous state-of-the-art methods, including deep-learning-based approaches, on five widely used benchmarks: CK+, KDEF, JAFFE, Oulu-CASIA VIS, and RaFD. In addition, the superiority of the OPD-GQMBP descriptor is demonstrated against 10 deep features (e.g., VGG, ResNet, DenseNet, GoogLeNet, and Inception) and 12 recent and powerful LBP variants.
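
As a concrete illustration of the pipeline, the sketch below shows the two feature branches in Python, assuming a 49-point landmark detector has already been run on the face image. The library choices (NumPy, scikit-image) and all parameter values (patch size, HOG cells, sub-image size) are illustrative assumptions rather than the paper's settings, and plain uniform LBP merely stands in for OPD-GQMBP, whose full definition is not given in this abstract.

```python
import numpy as np
from skimage.draw import line
from skimage.feature import hog, local_binary_pattern

def shape_descriptor(landmarks, patch_size=64):
    """HOG on a binary patch interpolated from the 49 (row, col) landmarks."""
    # Scale landmark coordinates into a patch_size x patch_size grid.
    pts = np.asarray(landmarks, dtype=float)
    pts -= pts.min(axis=0)
    pts *= (patch_size - 1) / max(pts.max(), 1e-6)
    pts = pts.round().astype(int)
    # Binary patch: rasterize straight segments between consecutive points.
    # (An assumption: the paper's interpolation may instead connect
    # landmarks per facial component, e.g. eyes, brows, mouth.)
    patch = np.zeros((patch_size, patch_size), dtype=np.uint8)
    for (r0, c0), (r1, c1) in zip(pts[:-1], pts[1:]):
        rr, cc = line(r0, c0, r1, c1)
        patch[rr, cc] = 1
    # HOG on the binary patch yields the shape descriptor.
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def texture_descriptor(gray, landmarks, half=8):
    """One sub-image per landmark; uniform LBP stands in for OPD-GQMBP."""
    feats = []
    for r, c in np.asarray(landmarks).round().astype(int):
        # 2*half x 2*half sub-image centered on the landmark.
        sub = gray[max(r - half, 0):r + half, max(c - half, 0):c + half]
        codes = local_binary_pattern(sub, P=8, R=1, method="uniform")
        # Uniform LBP with P=8 produces codes in [0, 9].
        hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```

In the full framework, the two vectors would be combined (e.g., concatenated) into a single representation before classification; the choice of classifier is not specified in this abstract.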
