Abstract

Traditional bidirectional two-dimensional (2D) principal component analysis ((2D)PCA-L2) is sensitive to outliers because its objective function is the least-squares criterion based on the L2-norm. This paper proposes a simple but effective L1-norm-based bidirectional 2D principal component analysis ((2D)PCA-L1), which jointly exploits the merits of bidirectional 2D subspace learning and the L1-norm-based distance criterion. Experimental results on two popular face databases show that the proposed method is more robust to outliers than several principal-component-analysis-based methods in data compression and object recognition.

Keywords: bidirectional two-dimensional principal component analysis; L2-norm; outliers; L1-norm; optimization
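The core ingredient of the proposed method is replacing the L2-norm least-squares criterion with an L1-norm criterion when extracting projection directions. As a rough illustration only (not the paper's exact algorithm), the following sketch shows the standard greedy L1-norm PCA iteration in the style of Kwak's PCA-L1, which a bidirectional variant would apply along both the row and column directions of the image matrices; all function names here are illustrative.

```python
import numpy as np

def pca_l1_direction(X, n_iter=100, seed=0):
    """Greedy L1-norm PCA sketch: find a unit vector w maximizing
    sum_i |w . x_i| over the rows x_i of X (assumed centered).

    At each step the polarity s_i = sign(w . x_i) is fixed and w is
    re-aligned with the polarity-weighted sample sum, which provably
    does not decrease the L1 objective.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1                    # avoid degenerate zero polarity
        w_new = X.T @ s                  # polarity-weighted sample sum
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):        # converged: polarities stable
            break
        w = w_new
    return w

# Toy data: dominant variance along the first axis, plus one outlier.
rng = np.random.default_rng(1)
t = rng.standard_normal(200)
X = np.outer(t, [3.0, 0.0]) + 0.1 * rng.standard_normal((200, 2))
X[0] = [0.0, 20.0]                       # a single gross outlier
w = pca_l1_direction(X - X.mean(axis=0))
```

Because each sample contributes through its sign rather than its squared magnitude, the single outlier has bounded influence on `w`, which is the robustness property the abstract claims for (2D)PCA-L1.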
