Abstract

Feature extraction and representation are critical in facial expression recognition. Facial features can be extracted from either static images or dynamic image sequences; however, static images may not provide as much discriminative information as dynamic image sequences. From the feature extraction point of view, geometric features are often sensitive to shape and resolution variations, whereas appearance-based features may contain redundant information. In this paper, we propose a component-based facial expression recognition method that utilizes spatiotemporal features extracted from dynamic image sequences, where the spatiotemporal features are extracted from facial areas centered at 38 detected fiducial interest points. Because not all features are important for facial expression recognition, we use the AdaBoost algorithm to select the most discriminative features. Moreover, we present a multi-classifier fusion framework, based on the median, mean, and product rules of classifier fusion, to improve expression classification accuracy. Experimental studies conducted on the Cohn-Kanade database show that our approach, which combines boosted component-based spatiotemporal features with a multi-classifier fusion strategy, outperforms earlier approaches to expression recognition.
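As a minimal sketch of the fusion rules named in the abstract: the median, mean, and product rules combine the per-class posterior estimates produced by several classifiers and pick the class with the highest fused score. The function and shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_probabilities(probs, rule="mean"):
    """Fuse posterior estimates from several classifiers for one sample.

    probs: array-like of shape (n_classifiers, n_classes).
    Returns the index of the winning class under the chosen rule.
    (Hypothetical helper for illustration; not from the paper.)
    """
    probs = np.asarray(probs, dtype=float)
    if rule == "mean":          # average the posteriors across classifiers
        fused = probs.mean(axis=0)
    elif rule == "median":      # robust to a single outlier classifier
        fused = np.median(probs, axis=0)
    elif rule == "product":     # assumes classifier outputs are independent
        fused = probs.prod(axis=0)
    else:
        raise ValueError(f"unknown fusion rule: {rule}")
    return int(np.argmax(fused))

# Example: three classifiers scoring three expression classes.
p = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.5, 0.4, 0.1]]
```

The product rule is the most sensitive of the three: a single classifier assigning near-zero probability to a class vetoes it, which is why the mean and median rules are often more robust in practice.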
