Abstract

To exploit the information in two-dimensional structured data, two-dimensional principal component analysis (2-DPCA) has been widely used for dimensionality reduction and feature extraction. However, 2-DPCA is sensitive to outliers, which are common in real applications, and many robust 2-DPCA methods have therefore been proposed. Existing robust 2-DPCA methods nevertheless have several weaknesses. First, they are not sufficiently robust to outliers. Second, centering a sample set contaminated with outliers using the L2-norm distance is usually biased. Third, most of them do not preserve a desirable property of 2-DPCA, rotational invariance, which is important for learning algorithms. To alleviate these issues, we present a generalized robust 2-DPCA, named 2-DPCA with $$\ell _{2,p}$$ -norm minimization ( $$\ell _{2,p}$$ -2-DPCA), for image representation and recognition. In $$\ell _{2,p}$$ -2-DPCA, the $$\ell _{2,p}$$ -norm is employed as the distance metric to measure the reconstruction error, which alleviates the effect of outliers. The proposed method is thus robust to outliers while preserving the rotational invariance of 2-DPCA and well characterizing the geometric structure of the samples. Moreover, most existing robust PCA methods estimate the sample mean by averaging a database that contains outliers, which is usually biased; in $$\ell _{2,p}$$ -2-DPCA, the sample mean is instead treated as an unknown variable to remedy this bias. To solve $$\ell _{2,p}$$ -2-DPCA, we propose an iterative algorithm that has a closed-form solution in each iteration. Experimental results on several benchmark databases demonstrate the effectiveness and advantages of our method.
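The abstract describes an iterative algorithm with a closed-form step per iteration. As a rough illustration of this family of methods (not the paper's exact algorithm), the sketch below uses an iteratively reweighted least-squares construction: each iteration re-estimates the mean as a weighted average, solves a weighted eigenproblem in closed form, and updates per-sample weights from the current reconstruction errors. The weight rule and all function/parameter names are assumptions for illustration; with $$p=2$$ and uniform weights the sketch reduces to classical 2-DPCA.

```python
import numpy as np

def l2p_2dpca(images, k, p=1.0, n_iter=30, eps=1e-8):
    """Illustrative IRLS-style sketch of l2,p-norm 2-DPCA (hypothetical,
    not the paper's algorithm).

    images : (N, m, n) array of 2-D samples
    k      : number of projection directions
    p      : norm power, 0 < p <= 2 (p = 2 recovers classical 2-DPCA)
    """
    X = np.asarray(images, dtype=float)
    N, m, n = X.shape
    w = np.ones(N)                      # per-sample weights
    W = np.eye(n)[:, :k]                # initial projection basis
    for _ in range(n_iter):
        # treat the mean as a variable: re-estimate it as a weighted average
        M = np.tensordot(w, X, axes=1) / w.sum()
        D = X - M
        # closed-form step: top-k eigenvectors of the weighted scatter matrix
        S = np.einsum('i,ijk,ijl->kl', w, D, D)
        vals, vecs = np.linalg.eigh(S)
        W = vecs[:, ::-1][:, :k]
        # down-weight samples with large reconstruction error (outliers)
        R = D - D @ W @ W.T
        err = np.sum(R ** 2, axis=(1, 2))
        w = (err + eps) ** ((p - 2) / 2)
    return M, W
```

For small $$p$$ the weight update shrinks the influence of samples with large residuals, which is the mechanism behind the claimed robustness to outliers; the eigen step stays in closed form because, with the weights fixed, the objective is a weighted quadratic in the projection basis.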
