Abstract

Principal component analysis (PCA) is an effective method for dimensionality reduction, feature extraction, and pattern recognition. Kernel principal component analysis (KPCA) is a natural nonlinear generalization of PCA that implicitly performs linear PCA in a high-dimensional feature space via the kernel trick. However, both conventional PCA and KPCA are sensitive to outliers. Existing robust KPCA methods must eigendecompose the Gram matrix directly at each step, which becomes computationally infeasible when the number of training samples is large, since the matrix grows with the sample count. By extending an existing robust PCA algorithm with kernel methods, we present a novel robust adaptive algorithm for computing the kernel principal components. The proposed method preserves KPCA's ability to capture underlying nonlinear structure while remaining robust to outliers by restraining the influence of outlying samples. Unlike existing robust KPCA methods, our method does not need to store the kernel matrix, which significantly reduces the storage burden. In addition, our method can naturally be extended to an incremental learning version. Experimental results on synthetic data indicate that the proposed algorithm is effective and promising.
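For context, the following is a minimal sketch of the conventional batch KPCA that the abstract refers to: it eigendecomposes the n-by-n centered Gram matrix, which is exactly the storage and computation cost the proposed adaptive method avoids. The RBF kernel choice and all function names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and Y (an illustrative choice).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def batch_kpca(X, n_components=2, gamma=1.0):
    """Conventional batch KPCA: eigendecompose the centered n-by-n Gram matrix."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)          # full Gram matrix: O(n^2) storage
    # Center the kernel matrix in feature space.
    one_n = np.ones((n, n)) / n
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the symmetric centered Gram matrix.
    eigvals, eigvecs = np.linalg.eigh(K_c)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize expansion coefficients so components have unit norm in feature space.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    # Projections of the training samples onto the kernel principal components.
    return K_c @ alphas

# Usage: project 200 two-dimensional samples onto 2 kernel principal components.
X = np.random.randn(200, 2)
Z = batch_kpca(X, n_components=2, gamma=0.5)
```

Because the Gram matrix K is n-by-n, both its storage and its eigendecomposition scale poorly with the number of training samples; the robust adaptive algorithm proposed here updates the kernel principal components without ever materializing this matrix.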
