Abstract

Face recognition is one of the most widely used techniques for identifying a person. This study develops a new nonlinear subspace learning method named "supervised kernel locality-based discriminant neighborhood embedding" (SKLDNE), which performs classification by learning an optimal embedded subspace from the original high-dimensional space. In this approach, nonlinear and complex variations of face images are effectively represented through nonlinear kernel mapping, while local structure information within each class and discriminant information between distinct classes are simultaneously preserved to further improve classification performance. To evaluate its robustness, the proposed method was compared with several well-known pattern recognition methods in comprehensive experiments on six publicly available datasets. Experimental results show that our method consistently outperforms its competitors, demonstrating strong potential for use in many real-world systems.

Highlights

  • In the field of face recognition, many dimensionality-reduction-based recognition approaches have been developed in recent years [1,2,3]

  • To provide a reliable and thorough comparison, the efficiency of the proposed supervised kernel locality-based discriminant neighborhood embedding (SKLDNE) technique was compared with principal component analysis (PCA), kernel principal component analysis (KPCA), linear discriminant analysis (LDA), unsupervised discriminant projection (UDP), locality preserving projections (LPP), discriminant neighborhood embedding (DNE), and the locality-based discriminant neighborhood embedding (LDNE) method through a broad range of experiments on publicly available face datasets, i.e., Yale face, Olivetti Research Laboratory (ORL) face, Head Pose, and Sheffield



Summary

Introduction

In the field of face recognition, many dimensionality-reduction-based recognition approaches have been developed in recent years [1,2,3]. Dimensionality reduction is a central problem in numerous recognition techniques, caused by the great quantity of high-dimensional data in many real-world applications [4,5,6]. Dimensionality reduction techniques can be divided into two main groups: linear and nonlinear. Principal component analysis (PCA) is one of the best-known linear methods [13,14,15]. PCA aims to retain global geometric information for data representation by maximizing the trace of the feature covariance matrix [13,16,17]. Linear discriminant analysis (LDA) is a linear technique that seeks to extract discriminant information for classification by maximizing the ratio of inter-class to intra-class scatter [16,18].
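To make the PCA step above concrete, the following is a minimal sketch of linear PCA as described here (maximizing retained variance via the top eigenvectors of the feature covariance matrix). This illustrates the baseline linear method only, not the paper's SKLDNE technique; all function and variable names are illustrative.

```python
import numpy as np

def pca(X, n_components):
    """Project the rows of X onto the n_components leading principal axes.

    The principal axes are the eigenvectors of the feature covariance
    matrix with the largest eigenvalues, i.e. the directions along which
    the projected data retains the most variance.
    """
    X_centered = X - X.mean(axis=0)           # center each feature
    cov = np.cov(X_centered, rowvar=False)    # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]         # sort by descending eigenvalue
    W = eigvecs[:, order[:n_components]]      # top-k eigenvectors as columns
    return X_centered @ W                     # low-dimensional embedding

# Example: reduce 100 samples from 10 dimensions to 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

Nonlinear extensions such as KPCA (and the kernel mapping used by SKLDNE) apply the same idea after mapping the data into a higher-dimensional feature space via a kernel function.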


