The subspace method based on principal component analysis (PCA) is a useful tool for classifying and representing patterns. To learn the nonlinear structure of data, kernel subspace methods based on kernel PCA have recently attracted considerable attention. By applying a nonlinear transformation, these methods efficiently capture the nonlinear structure of the data, which leads to effective subspace construction for data with complex nonlinear structure. However, the performance of a kernel subspace method degrades dramatically in the presence of noisy features, because the method relies on a fully dense loading matrix in the constructed subspace. This means that the subspace for each class is constructed from not only crucial features but also noisy features, which significantly disturb the PCA procedure. To resolve this issue, we propose a sparse kernel subspace method based on sparse PCA. To construct subspaces from crucial features only, we incorporate sparsity into the eigenvector estimation procedure in the nonlinear feature space. We then construct the subspaces of nonlinear features from the sparse eigenvectors, and classify an unknown vector by projecting it onto the constructed subspaces. The proposed method thus not only captures nonlinear structure in the data but also estimates eigenvectors reliably in the presence of noisy features, which leads to high classification accuracy. Monte Carlo simulations and a real-data analysis show that the proposed sparse kernel subspace method yields effective classification results.
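The classification pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses an RBF kernel, and it replaces the paper's sparse-PCA eigenvector estimation with simple hard-thresholding of the kernel-PCA expansion coefficients as a crude stand-in for sparsity. All function names and parameter values are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2). RBF is an
    assumed kernel choice; the abstract does not specify one."""
    sq = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def fit_sparse_kernel_subspace(X, n_components=2, gamma=0.5, thresh=0.05):
    """Per-class kernel PCA with sparsified eigenvector coefficients.
    Hard-thresholding below is a simplification of the sparse-PCA
    estimation the paper proposes."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    Kc = J @ K @ J                            # centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    top = np.argsort(vals)[::-1][:n_components]
    A = vecs[:, top]
    A = np.where(np.abs(A) > thresh, A, 0.0)  # sparsify coefficients
    # Renormalize so each expansion has unit length in feature space:
    # a feature-space vector sum_i a_i * phi~(x_i) has norm a^T Kc a.
    norms = np.sqrt(np.maximum(np.diag(A.T @ Kc @ A), 1e-12))
    return {"X": X, "J": J, "A": A / norms, "gamma": gamma,
            "kmean": K.mean(axis=0)}

def subspace_score(x, m):
    """Squared length of the projection of phi(x) onto the class subspace."""
    k = rbf_kernel(x[None, :], m["X"], m["gamma"])[0]
    kc = (k - m["kmean"]) @ m["J"]            # center the test kernel row
    return float(np.sum((kc @ m["A"]) ** 2))

def classify(x, models):
    """Assign x to the class whose subspace captures most of phi(x)."""
    return max(models, key=lambda c: subspace_score(x, models[c]))
```

In this sketch, thresholding zeroes the expansion coefficients attached to training points that contribute little to an eigenvector, which is the spirit, though not the letter, of constructing subspaces from crucial features only.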