Facial expression classification aims to recognize human emotions from face images. The major challenge is extracting discriminative features from face images that differentiate the various emotions. To tackle this challenge, a new feature extraction approach is proposed in this paper. The proposed approach defines a new set of salient patterns at facial keypoint locations, in contrast to conventional approaches that either represent the whole face image as a regular grid or use local image patches centered at all facial keypoint locations. Driven by the proposed salient patterns, both geometric and texture features are extracted, concatenated, and incorporated into a machine learning framework to perform facial expression classification. The proposed approach is evaluated on the well-known CK+ benchmark dataset to demonstrate its superior performance.
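The abstract does not specify the particular descriptors or classifier; the following is a minimal sketch of the described pipeline, assuming pairwise keypoint distances as geometric features, patch intensity histograms as texture features, and a linear SVM as the machine learning component. All function names and parameters are illustrative, not the paper's method.

```python
# Illustrative sketch: geometric + texture features at facial keypoints,
# concatenated into one descriptor and fed to a classifier.
import numpy as np
from sklearn.svm import SVC


def geometric_features(keypoints):
    """Pairwise Euclidean distances between facial keypoints (N x 2 array)."""
    diffs = keypoints[:, None, :] - keypoints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(keypoints), k=1)   # upper triangle, no duplicates
    return dists[iu]


def texture_features(image, keypoints, half=8, bins=16):
    """Intensity histograms of small patches centered at each keypoint
    (a stand-in for whatever texture descriptor the paper actually uses)."""
    h, w = image.shape
    feats = []
    for x, y in keypoints.astype(int):
        x0, x1 = max(0, x - half), min(w, x + half)
        y0, y1 = max(0, y - half), min(h, y + half)
        hist, _ = np.histogram(image[y0:y1, x0:x1],
                               bins=bins, range=(0, 255))
        feats.append(hist / max(hist.sum(), 1))   # normalize patch histogram
    return np.concatenate(feats)


def expression_descriptor(image, keypoints):
    """Concatenate geometric and texture features into a single vector."""
    return np.concatenate([geometric_features(keypoints),
                           texture_features(image, keypoints)])


def train_classifier(samples, labels):
    """samples: list of (grayscale face image, keypoints) pairs;
    labels: emotion labels (e.g., the CK+ expression categories)."""
    X = np.stack([expression_descriptor(img, kp) for img, kp in samples])
    clf = SVC(kernel="linear")
    clf.fit(X, labels)
    return clf
```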