Abstract

In this paper, we propose a novel transductive transfer linear discriminant analysis (TTLDA) approach for cross-pose facial expression recognition (FER), in which the training and testing facial images are captured under two different facial views. The basic idea of the proposed method is to select a set of auxiliary unlabelled facial images from the target facial pose and combine it with the labelled training image set of the source facial pose for discriminant analysis, where the labels of the auxiliary images are treated as parameters of TTLDA to be optimized. Once the class labels of the auxiliary image set have been learned, we train a support vector machine (SVM) on it to classify the testing facial images. In addition, to make full use of the facial appearance information in color images and improve expression recognition accuracy, we adopt the color scale-invariant feature transform (SIFT) to describe facial image features. Finally, we conduct experiments on the BU-3DFE and Multi-PIE multiview color facial expression databases to evaluate the proposed cross-pose FER method and compare its results with those of other methods.
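The following is a minimal sketch of the pipeline outlined above, not the paper's exact formulation: the joint TTLDA optimization of the auxiliary labels is approximated here by a simple alternating scheme (fit a discriminant model on the pooled source and auxiliary data, re-estimate the auxiliary labels, repeat), and the color-SIFT features are assumed to be precomputed vectors. All names (X_source, X_aux, X_test, transductive_label_estimation) are illustrative, not from the paper.

```python
# Sketch of the cross-pose FER pipeline: transductive label estimation for the
# auxiliary (target-pose) images, followed by an SVM trained on those labels.
# This approximates the paper's TTLDA objective with an alternating LDA scheme.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC


def transductive_label_estimation(X_source, y_source, X_aux, n_iters=10):
    """Estimate class labels for the unlabelled auxiliary target-pose samples."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_source, y_source)             # initialise from source-pose data only
    y_aux = lda.predict(X_aux)

    for _ in range(n_iters):
        X_joint = np.vstack([X_source, X_aux])
        y_joint = np.concatenate([y_source, y_aux])
        lda = LinearDiscriminantAnalysis()
        lda.fit(X_joint, y_joint)            # discriminant analysis on pooled data
        y_new = lda.predict(X_aux)           # refresh the auxiliary labels
        if np.array_equal(y_new, y_aux):     # stop once the labels no longer change
            break
        y_aux = y_new
    return y_aux


# Toy usage with random stand-in features (replace with real color-SIFT vectors).
rng = np.random.default_rng(0)
X_source = rng.normal(size=(120, 64))        # labelled source-pose features
y_source = rng.integers(0, 6, size=120)      # six basic expression classes
X_aux = rng.normal(size=(80, 64))            # unlabelled target-pose features
X_test = rng.normal(size=(20, 64))           # test features from the target pose

y_aux = transductive_label_estimation(X_source, y_source, X_aux)
clf = SVC(kernel="linear").fit(X_aux, y_aux)  # final SVM trained on the auxiliary set
predictions = clf.predict(X_test)
```

In the paper the auxiliary labels are obtained by optimizing the TTLDA criterion directly; the alternating refit above is only a stand-in for that step, chosen so the sketch stays short and runnable with scikit-learn.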
