Abstract

Over the past decade or so, subspace methods have been widely used in face recognition, generally with considerable success. Subspace approaches, however, generally assume that the training data represents the full spectrum of image variations. Unfortunately, in face recognition applications one usually has an under-represented training set. A well-known example is that posed by images bearing different facial expressions, i.e., where the expression in the training image differs from that in the testing image. If the goal is to recognize the identity of the person in the picture, facial expressions act as distractors. Subspace methods do not address this problem successfully, because the learned feature space depends on the set of training images available, leading to poor generalization. In this communication, we show how the deformation of the face between the training and testing images can be used to solve the problem defined above. To achieve this, we calculate the facial deformation between the testing image and each of the training images, project this result onto the (learned) subspace, and weight each feature (dimension) of that subspace in inverse proportion to the estimated deformation. We show experimental results of our approach on the representations given by the following subspace techniques: principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). We also compare our approach against a number of known techniques and show that our weighted LDA algorithm outperforms the rest.
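To make the matching step concrete, below is a minimal sketch of this weighting scheme. The function name, the normalized inverse weighting 1/(deformation + eps), and the assumption that a per-pair deformation field has already been estimated (e.g., via optical flow) are illustrative choices of ours; the abstract does not specify the exact weighting function or deformation estimator.

```python
import numpy as np

def weighted_subspace_match(x_test, X_train, W, deformations, eps=1e-8):
    """Hypothetical sketch of deformation-weighted subspace matching.

    x_test       : (d,) vectorized test image
    X_train      : (n, d) vectorized training images
    W            : (d, k) learned subspace basis (e.g., PCA, ICA, or LDA)
    deformations : (n, d) estimated deformation between the test image
                   and each training image, vectorized like the images
    Returns the index of the best-matching training image.
    """
    y_test = x_test @ W                # project the test image onto the subspace
    Y_train = X_train @ W              # project the training images
    D = np.abs(deformations @ W)       # per-feature magnitude of projected deformation

    # Weight each subspace dimension inverse-proportionally to the estimated
    # deformation: heavily deformed features contribute less to the distance.
    weights = 1.0 / (D + eps)                         # (n, k)
    weights /= weights.sum(axis=1, keepdims=True)     # normalize per pair

    # Weighted squared distance between the test image and each training image
    dists = np.sum(weights * (Y_train - y_test) ** 2, axis=1)
    return int(np.argmin(dists))

# Toy usage with random data (purely illustrative):
rng = np.random.default_rng(0)
d, n, k = 100, 5, 10
W = rng.standard_normal((d, k))
X_train = rng.standard_normal((n, d))
x_test = X_train[2] + 0.1 * rng.standard_normal(d)
deformations = rng.random((n, d))      # stand-in for an optical-flow estimate
print(weighted_subspace_match(x_test, X_train, W, deformations))
```

The design intuition is that regions of the face that deform strongly between the two images (e.g., around the mouth during a smile) carry little identity information for that comparison, so the subspace dimensions most affected by the deformation are down-weighted on a per-pair basis rather than globally.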
