Abstract

Many important kernel methods in machine learning, such as kernel principal component analysis, feature approximation, denoising, compression, and prediction, require the computation of the dominant set of eigenvectors of the symmetric kernel Gram matrix. Recently, an efficient incremental approach was presented for the fast calculation of the dominant kernel eigenbasis. In this paper we propose faster algorithms for incrementally updating and downsizing the dominant kernel eigenbasis. These methods are well suited for large-scale problems since they are efficient in terms of both computational complexity and data management.
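For orientation, the sketch below shows the batch computation that the incremental methods aim to avoid repeating: forming the symmetric kernel Gram matrix, centring it, and extracting its dominant eigenvectors (standard kernel PCA). This is a minimal illustration under assumed choices (an RBF kernel, numpy's eigh), not the paper's incremental algorithm; the function names rbf_gram and dominant_kernel_eigenbasis are hypothetical.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    # Symmetric Gram matrix K with K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def dominant_kernel_eigenbasis(X, k, gamma=1.0):
    # Batch (non-incremental) computation of the k dominant eigenpairs
    # of the centred Gram matrix; cost is O(n^3) each time it is rerun.
    n = X.shape[0]
    K = rbf_gram(X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n   # double-centring, as in kernel PCA
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the k largest
    return w[idx], V[:, idx]

# Example: dominant eigenbasis of 200 points. An incremental update rule
# would revise (w, V) as samples are added or removed, rather than
# recomputing the decomposition from scratch as done here.
X = np.random.default_rng(0).normal(size=(200, 5))
w, V = dominant_kernel_eigenbasis(X, k=10)
```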
