Abstract

Recently, robust-norm-based principal component analysis (PCA) for feature extraction has proven very effective for image analysis; existing formulations consider either minimizing the reconstruction error or maximizing the data variance in a low-dimensional subspace. However, both criteria are important for feature extraction. Furthermore, most existing methods cannot obtain satisfactory results because they rely on an inflexible robust norm as the distance metric. To address these problems, this paper proposes a novel robust PCA formulation called Double L2,p-norm based PCA (DLPCA) for feature extraction, in which the minimization of reconstruction error and the maximization of variance are simultaneously taken into account in a unified framework. In the reconstruction error function, we aim to learn a latent subspace that bridges the relationship between the transformed features and the original features. To make the objective insensitive to outliers, we adopt the L2,p-norm as the distance metric for both the reconstruction error and the data variance. These characteristics make our method more applicable for feature extraction. We present an effective iterative algorithm to solve this challenging problem and provide a theoretical analysis of its convergence. Experimental results on several databases demonstrate the effectiveness of our model.
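The abstract's central ingredient is the L2,p-norm used as the distance metric. As a point of reference, the L2,p-norm of a matrix is commonly defined as the p-th root of the sum of the p-th powers of the rows' Euclidean norms; for p = 2 it reduces to the Frobenius norm, while p < 2 downweights rows with large residuals, which is the source of the robustness to outliers. The following is a minimal illustrative sketch of that standard definition (not the paper's DLPCA algorithm itself):

```python
import numpy as np

def l2p_norm(X, p=1.0):
    """L2,p-norm of a matrix X (rows as samples):
    ( sum_i ||x_i||_2^p )^(1/p).

    p = 2 gives the Frobenius norm; smaller p (e.g. p = 1,
    the L2,1-norm) reduces the influence of outlier rows.
    """
    row_norms = np.linalg.norm(X, axis=1)  # Euclidean norm of each row
    return np.sum(row_norms ** p) ** (1.0 / p)
```

For the identity matrix of size 3, every row has unit norm, so the L2,1-norm is 3 and the L2,2-norm equals the Frobenius norm, sqrt(3).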
