Abstract

Classic kernel principal component analysis (KPCA) is computationally inefficient when extracting features from large data sets. In this paper, we propose an algorithm, called efficient KPCA (EKPCA), that improves the computational efficiency of KPCA by using a linear combination of a small subset of the training samples, referred to as basic patterns, to approximately express the KPCA feature extractor, i.e., the eigenvectors of the covariance matrix in the feature space. We show that feature correlation (i.e., the correlation between different feature components) can be evaluated by the cosine distance between kernel vectors, which are the column vectors of the kernel matrix. The proposed algorithm is easy to implement: it first uses feature correlation evaluation to determine the basic patterns and then uses them to reconstruct the KPCA model, perform feature extraction, and classify the test samples. Since the number of basic patterns is usually much smaller than the number of training samples, feature extraction with EKPCA is much more computationally efficient than with KPCA. Experimental results on several benchmark data sets show that EKPCA is much faster than KPCA while achieving comparable classification performance.
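To illustrate the general idea described in the abstract, the following is a minimal sketch of an EKPCA-style pipeline: it selects basic patterns by thresholding the cosine similarity between kernel-matrix columns, rebuilds a KPCA model on those patterns only, and extracts features for test samples. The RBF kernel, the greedy selection rule, the similarity threshold, and all function names are assumptions made for illustration; they are not the authors' exact algorithm or settings.

```python
# Illustrative sketch only; the kernel choice, threshold, and greedy selection
# rule are assumptions, not the paper's exact procedure.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def select_basic_patterns(K, threshold=0.95):
    """Greedily keep samples whose kernel column is not highly
    cosine-correlated with the columns of already-selected samples."""
    norms = np.linalg.norm(K, axis=0)
    selected = []
    for j in range(K.shape[1]):
        if not selected:
            selected.append(j)
            continue
        cos = K[:, selected].T @ K[:, j] / (norms[selected] * norms[j])
        if np.max(cos) < threshold:
            selected.append(j)
    return np.array(selected)

def fit_ekpca(X_train, gamma=0.1, threshold=0.95, n_components=5):
    K_full = rbf_kernel(X_train, X_train, gamma=gamma)
    basic_idx = select_basic_patterns(K_full, threshold)
    X_basic = X_train[basic_idx]
    # Rebuild a (much smaller) KPCA model on the basic patterns only.
    K = rbf_kernel(X_basic, X_basic, gamma=gamma)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return X_basic, alphas

def transform_ekpca(X_test, X_basic, alphas, gamma=0.1):
    # Feature extraction needs kernels against the basic patterns only,
    # which is where the computational saving comes from.
    # (Centering of the test kernel is omitted for brevity.)
    K_test = rbf_kernel(X_test, X_basic, gamma=gamma)
    return K_test @ alphas

# Example usage with synthetic data:
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
X_test = rng.normal(size=(20, 10))
X_basic, alphas = fit_ekpca(X_train)
features = transform_ekpca(X_test, X_basic, alphas)
print(X_basic.shape, features.shape)
```

Because test-time projection only involves kernel evaluations against the retained basic patterns rather than the full training set, the cost of feature extraction scales with the (typically much smaller) number of basic patterns, which is the source of the speedup claimed in the abstract.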


