Abstract

Facial expression recognition (FER) is an important means for machines to understand changes in human facial expressions. Expression recognition using single-modal facial images, such as grayscale, may suffer from illumination changes and a lack of detailed expression-related information. In this study, multi-modal facial images, namely grayscale, depth, and local binary pattern (LBP) images, are used to recognize six basic facial expressions: happiness, sadness, anger, disgust, fear, and surprise. Facial depth images are first used for robust face detection. The geometric feature is represented by the displacement and angle variation of facial landmark points, computed with the help of depth information. The local appearance feature, obtained by concatenating LBP histograms of expression-prominent patches, is used to capture expression changes that are difficult to describe by geometric changes alone. An improved random forest classifier based on feature selection is then used to recognize the different facial expressions. Results of comparative evaluations on benchmark datasets show that the proposed method outperforms several state-of-the-art FER approaches based on hand-crafted features. Its performance is comparable to that of popular convolutional neural network (CNN)-based FER approaches, while placing lower demands on training data and hardware.
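The following is a minimal Python sketch of the appearance-feature and classification stage described above: concatenating patch-wise LBP histograms, fusing them with a geometric feature vector, and training a random forest with feature selection. It is an illustration only, not the authors' implementation; the scikit-image/scikit-learn calls, the 4x4 patch grid, the uniform-LBP settings, and the randomly generated stand-in faces, geometric features, and labels are all assumptions not taken from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

def lbp_patch_histograms(gray_face, patch_grid=(4, 4), n_points=8, radius=1):
    """Concatenate uniform-LBP histograms computed over a grid of face patches."""
    lbp = local_binary_pattern(gray_face, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform LBP yields P + 2 codes
    ph = gray_face.shape[0] // patch_grid[0]
    pw = gray_face.shape[1] // patch_grid[1]
    hists = []
    for i in range(patch_grid[0]):
        for j in range(patch_grid[1]):
            patch = lbp[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

rng = np.random.default_rng(0)

# Stand-in data: 120 aligned grayscale face crops plus a hypothetical geometric
# feature vector per face (landmark displacements / angle variations).
faces = rng.integers(0, 256, size=(120, 64, 64), dtype=np.uint8)
geometric = rng.random((120, 40))
labels = rng.integers(0, 6, size=120)          # six basic expressions

appearance = np.stack([lbp_patch_histograms(f) for f in faces])
X = np.hstack([geometric, appearance])         # fused geometric + appearance features

# Feature selection followed by a random forest: a simple stand-in for the
# paper's "improved random forest classifier based on feature selection".
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0)
).fit(X, labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X), labels)
print(clf.predict(selector.transform(X[:5])))
```

In practice, the stand-in arrays would be replaced with real face crops detected via the depth images and with the landmark-based geometric features described in the abstract.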
