Abstract

Recently, there has been considerable interest in the underlying sparse representation structure of high-dimensional data such as face images. In this paper, we propose two novel and efficient dimensionality reduction methods, Fast Sparsity Preserving Projections (FSPP) and Fast Fisher Sparsity Preserving Projections (FFSPP), which aim to preserve the sparse representation structure of high-dimensional data. Unlike the existing Sparsity Preserving Projections (SPP), where the sparse representation structure is learned by solving n (the number of samples) time-consuming \( \ell^1 \)-norm optimization problems, FSPP constructs a dictionary through classwise PCA decompositions and learns the sparse representation structure under the constructed dictionary through matrix–vector multiplications, which is much more computationally tractable. FFSPP takes both the sparse representation structure and discriminative efficiency into account by adding a Fisher constraint to the FSPP formulation, which improves FSPP's discriminating ability. Both of the proposed methods reduce to a generalized eigenvalue problem. Experimental results on three publicly available face data sets (Yale, Extended Yale B, and ORL) and a standard document collection (Reuters-21578) validate the feasibility and effectiveness of the proposed methods.
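
The abstract names two computational building blocks: a dictionary assembled from classwise PCA decompositions, with coefficients obtained through matrix–vector multiplications instead of per-sample \( \ell^1 \) solves, and a final projection obtained from a generalized eigenvalue problem. The following is a minimal sketch of those ingredients only, under stated assumptions: the function names (classwise_pca_dictionary, coding_by_projection, projection_from_gev) and the regularization constant are hypothetical, and the exact FSPP/FFSPP objective matrices are defined in the paper rather than reproduced here.

```python
# Sketch of the two ingredients named in the abstract:
# (1) a dictionary built from classwise PCA bases, with coefficients computed by
#     matrix-vector products rather than per-sample l1-norm optimization, and
# (2) a generic generalized eigenvalue solve for the projection directions.
# The objective matrices A and B passed to projection_from_gev are placeholders
# for the paper's FSPP/FFSPP formulation, which is not reproduced here.
import numpy as np
from scipy.linalg import eigh

def classwise_pca_dictionary(X, y, atoms_per_class=5):
    """Stack the leading principal directions of each class into a dictionary D."""
    atoms = []
    for c in np.unique(y):
        Xc = X[y == c]                        # samples of class c, shape (n_c, d)
        Xc = Xc - Xc.mean(axis=0)             # center within the class
        # leading right singular vectors = classwise principal directions
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        atoms.append(Vt[:atoms_per_class])
    return np.vstack(atoms)                   # D has shape (k, d)

def coding_by_projection(X, D):
    """Representation coefficients via matrix-vector products (no l1 solver)."""
    return X @ D.T                            # S has shape (n, k)

def projection_from_gev(A, B, n_components=30):
    """Solve A w = lambda B w and keep the eigenvectors with largest eigenvalues."""
    eigvals, eigvecs = eigh(A, B + 1e-6 * np.eye(B.shape[0]))  # regularize B
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]
```

In this reading, S plays the role that the \( \ell^1 \)-derived sparse codes play in SPP, and A and B stand in for the scatter-type matrices that the paper builds from the data and the codes (with the Fisher constraint folded into A for FFSPP).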
