Abstract

This paper presents a novel scheme for feature extraction for face recognition that fuses local and global discriminant features. Facial changes due to variations in pose, illumination, expression, etc. often appear only in some regions of the whole face image, so global features extracted from the whole image fail to cope with these variations. To address this problem, face images are divided into a number of non-overlapping sub-images, and the G-2DFLD method is applied to each of these sub-images as well as to the whole image to extract local and global discriminant features, respectively. The G-2DFLD method is found to be superior to other appearance-based methods for feature extraction. All of these extracted local and global discriminant features are then fused into a large feature vector, whose dimensionality is reduced by the PCA technique to decrease the overall complexity of the system. A multi-class SVM is used as the classifier, operating on these reduced features. The proposed method was evaluated on two popular face recognition databases, the AT&T (formerly ORL) and the UMIST face databases. The experimental results show that the new method outperforms the global features extracted by the PCA, 2DPCA, PCA+FLD, 2DFLD and G-2DFLD methods in terms of face recognition accuracy.
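The following is a minimal sketch of the fusion pipeline described above: split each image into non-overlapping sub-images, extract per-block (local) and whole-image (global) features, concatenate them, reduce the fused vector with PCA, and classify with a multi-class SVM. It assumes grayscale face images as NumPy arrays and uses a plain pixel flattening as a stand-in for the G-2DFLD projection, since the paper's actual G-2DFLD computation is not reproduced here; all function names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def split_into_blocks(img, n_rows, n_cols):
    """Split an H x W image into n_rows * n_cols non-overlapping sub-images."""
    h, w = img.shape
    bh, bw = h // n_rows, w // n_cols
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(n_rows) for c in range(n_cols)]


def fused_features(img, n_rows=4, n_cols=4):
    """Concatenate local (per-block) and global (whole-image) features.

    The paper applies G-2DFLD to each sub-image and to the whole image;
    here a plain flattening stands in for that projection, purely to show
    how the local and global features are fused into one vector.
    """
    local = [block.ravel() for block in split_into_blocks(img, n_rows, n_cols)]
    global_feat = img.ravel()
    return np.concatenate(local + [global_feat])


def train_and_evaluate(X_train, y_train, X_test, y_test):
    """X_* are lists/arrays of grayscale images, y_* are class labels
    (hypothetical variables; substitute a loader for AT&T or UMIST data)."""
    F_train = np.array([fused_features(img) for img in X_train])
    F_test = np.array([fused_features(img) for img in X_test])

    # PCA reduces the dimensionality of the fused feature vector;
    # SVC performs multi-class classification (one-vs-one by default).
    model = make_pipeline(PCA(n_components=60), SVC(kernel="rbf", C=10.0))
    model.fit(F_train, y_train)
    return model.score(F_test, y_test)
```

The choice of a 4x4 block grid, 60 PCA components, and an RBF-kernel SVM are placeholders; in practice these would be tuned on the training partition of the target database.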
