Abstract

This paper proposes the use of redundant features for efficient recognition of faces in still images, using a novel system framework that offers a detailed, systematic workflow for solving the facial recognition problem. The framework accepts still frontal face images and detects, extracts, and represents salient facial features (face, eyes, nose, and mouth) using facial detection and extraction techniques. The extracted features are then modeled by an ensemble of self-organizing maps. The ensemble outputs are subsequently reassembled into a single dataset consisting of a normalized image Euclidean distance matrix, which enhances the search space for optimal convergence and classification. The feasibility of the framework is tested using an experimental facial database captured during the study and three benchmark facial expression databases, namely the Extended Cohn–Kanade (CK+) database, the Japanese Female Facial Expression (JAFFE) database, and the MMI Facial Expression database. The results suggest that feature redundancy is indeed useful for efficient facial recognition: the support vector machine classifier recorded high accuracies across the various databases, with the normalized image Euclidean distance dataset producing the highest performance compared with the localized principal component analysis and unnormalized image Euclidean distance datasets. Furthermore, an overall classification accuracy above 99% was achieved on the experimental (nonexpressive, still-face) database, whereas the benchmark facial expression databases yielded slightly lower results. A future direction of this work is to further improve the framework so that it robustly handles severe facial variations.
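
To make the distance-matrix stage concrete, the following is a minimal sketch of how a normalized image Euclidean distance (IMED) matrix could feed a precomputed-kernel support vector machine, assuming the standard Gaussian pixel-weighting form of IMED, a simple rescaling as the normalization, and exp(-D) as the distance-to-kernel mapping. The abstract does not specify the paper's exact preprocessing, SOM-ensemble encoding, or normalization, so the toy 8x8 inputs, the sigma value, and the helper names below are illustrative assumptions only.

    import numpy as np
    from sklearn.svm import SVC

    def imed_metric_matrix(h, w, sigma=1.0):
        # Gaussian pixel-distance weighting matrix G used by the image Euclidean
        # distance: g_ij = exp(-|P_i - P_j|^2 / (2*sigma^2)) / (2*pi*sigma^2).
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        sq = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)

    def squared_imed(x, y, G):
        # Squared IMED between two equally sized image patches: (x - y)^T G (x - y).
        d = (x - y).ravel()
        return float(d @ G @ d)

    # Toy 8x8 "feature maps" standing in for the SOM-ensemble outputs
    # (illustrative only, not the paper's data).
    rng = np.random.default_rng(0)
    X = np.concatenate([rng.normal(0.0, 1.0, (20, 8, 8)),
                        rng.normal(1.0, 1.0, (20, 8, 8))])
    labels = np.array([0] * 20 + [1] * 20)

    G = imed_metric_matrix(8, 8, sigma=1.0)
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = squared_imed(X[i], X[j], G)

    D /= D.max()                    # assumed normalization: rescale distances to [0, 1]
    K = np.exp(-D)                  # distances -> similarity kernel for the SVM
    clf = SVC(kernel="precomputed").fit(K, labels)
    print("training accuracy:", clf.score(K, labels))

In practice, the rows and columns of D would be indexed by the face images (or their SOM-derived representations), and the kernel for unseen faces would be computed against the training set before calling clf.predict.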
