Abstract

Dimensionality reduction is essential for uncovering the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used for dimensionality reduction. In this paper, a novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), is proposed. SPDP, which attempts to preserve the sparse representation structure of the data while simultaneously maximizing between-class separability, can be regarded as a combination of manifold learning and sparse representation. Specifically, SPDP first creates a concatenated dictionary via class-wise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Second, a local between-class separability function is defined to characterize the scatter of the samples in different submanifolds. SPDP then integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is reduced to a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the feasibility and effectiveness of the proposed approach.
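The five steps named in the abstract can be read as the minimal NumPy/SciPy sketch below. Everything not stated in the abstract is an assumption made here for illustration: the ridge regularization, the coefficient-similarity affinity S @ S.T, the label-mask stand-in for the paper's local between-class function, and all hyperparameter values. The paper's actual objective is defined in its Methods section; this is a plausible reading, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def spdp(X, y, d_pca=20, dim=10, alpha=1.0):
    """Sketch of the SPDP pipeline described in the abstract.

    X: (n, f) samples, y: (n,) class labels. d_pca, dim, alpha and the
    ridge terms are illustrative choices, not values from the paper.
    """
    n, f = X.shape

    # Step 1: class-wise PCA decompositions, concatenated into a dictionary D.
    bases = []
    for c in np.unique(y):
        Xc = X[y == c]
        Xc = Xc - Xc.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        bases.append(Vt[:d_pca].T)           # (f, <=d_pca) PCA basis of class c
    D = np.hstack(bases)                     # (f, m) concatenated dictionary

    # Step 2: representation of each sample under D, obtained by
    # (ridge-regularized) least squares instead of l1 minimization.
    m = D.shape[1]
    S = np.linalg.solve(D.T @ D + 1e-6 * np.eye(m), D.T @ X.T).T   # (n, m)
    W = S @ S.T      # affinity from shared dictionary atoms (an assumption)

    # Step 3: local between-class separability; here simply a scatter over
    # pairs with different labels, a stand-in for the paper's local function.
    B = (y[:, None] != y[None, :]).astype(float)

    # Steps 4-5: combine both terms into a discriminant criterion and solve
    # the resulting symmetric generalized eigenvalue problem
    #     (X^T L X) v = lambda (X^T X) v,   with L = alpha*B - (W + W^T)/2.
    L = alpha * B - 0.5 * (W + W.T)
    A = X.T @ L @ X
    C = X.T @ X + 1e-6 * np.eye(f)
    evals, evecs = eigh(A, C)                # ascending eigenvalues
    return evecs[:, ::-1][:, :dim]           # top-`dim` projection directions
```

Given a projection matrix P = spdp(X, y), the low-dimensional embedding of the data would simply be Y = X @ P.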

Highlights

  • In many fields such as object recognition [1, 2], text categorization [3], and information retrieval [4], the data are usually provided in high-dimensional form; this makes it difficult to describe, understand, and recognize these data

  • The proposed method is transformed into a generalized eigenvalue problem

  • Both Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are based on the hypothesis that samples from each class lie on a linear subspace [14, 15]; that is, neither of them can identify the local submanifold structure hidden in high-dimensional data


Summary

Introduction

In many fields such as object recognition [1, 2], text categorization [3], and information retrieval [4], the data are usually provided in high-dimensional form, which makes them difficult to describe, understand, and recognize. LDA can extract at most K − 1 features (where K is the number of classes), which is unacceptable in many situations. Moreover, both PCA and LDA are based on the hypothesis that samples from each class lie on a linear subspace [14, 15]; that is, neither of them can identify the local submanifold structure hidden in high-dimensional data. Sparse learning algorithms provide superior recognition accuracy compared with conventional methods, but all of the sparse-coding-based dimensionality reduction methods mentioned above must solve an l1-norm minimization problem to construct the sparse weight matrix, which is computationally expensive. In contrast, SPDP is a supervised dimensionality reduction method that seeks a discriminating subspace in which both the sparse representation structure of the data and the label information are preserved. SPP, by comparison, does not exploit prior class information, which is valuable for classification and recognition problems such as face recognition. A sketch of the l1-based weight construction that SPDP sidesteps is given below.
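For contrast, here is a minimal sketch of the l1-based sparse weight matrix construction that SPP-like methods require, where each sample is coded over all the others. The Lasso relaxation via scikit-learn, the regularization strength lam, and the per-sample loop are illustrative assumptions, not the formulation of any particular cited method; the point is only that one l1 problem must be solved per sample, which is the cost SPDP avoids with class-wise PCA dictionaries and least squares.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_weight_matrix(X, lam=0.01):
    """SPP-style sparse weights: one l1 (Lasso) problem per sample.

    X: (n, f) samples. Row i of the returned W holds the sparse
    coefficients reconstructing x_i from the remaining samples.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # min_s ||x_i - X_{-i}^T s||^2 + lam * ||s||_1
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])
        W[i, others] = lasso.coef_
    return W
```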

Sparsity Preserving Discriminative Learning
Experiments
Methods
Conclusions
