Abstract

The kernel method suffers from the following problem: the computational efficiency of the feature extraction procedure is inversely proportional to the size of the training sample set. In this paper, we propose, from a novel viewpoint, a very simple and mathematically tractable method to produce a computationally efficient kernel-method-based feature extraction procedure. We first address the issue of how to make the feature extraction result of the reformulated kernel method closely approximate that of the naive kernel method. We identify those training samples that statistically contribute most to the feature extraction result and exploit them to reformulate the kernel method into a computationally efficient feature extraction procedure. The basic idea of the proposed method is as follows: when a training sample has little effect on the feature extraction result and is statistically highly correlated with the other training samples, the feature extraction term associated with that sample can be removed from the feature extraction procedure. The proposed method has two main advantages. First, it is the first to improve the kernel method through a formal and well-founded evaluation of the feature extraction terms. Second, it improves the kernel method at a low extra cost and thus has a much more computationally efficient training phase than most previous improvements to the kernel method. Experimental comparisons show that the proposed method performs well on classification problems. The paper also gives an intuitive, geometrical illustration of the relation between the identified training samples and the other training samples.
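The pruning idea in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' actual algorithm: it assumes a kernel expansion of the form y(x) = Σ_i α_i k(x_i, x), and scores each training sample by combining a small coefficient magnitude with a high average kernel correlation to the rest of the set; low-scoring terms are dropped from the expansion. The scoring rule, the RBF kernel choice, and the `keep_ratio` parameter are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prune_expansion(alpha, K, keep_ratio=0.5):
    # Hypothetical scoring rule: a term is removable when its coefficient
    # |alpha_i| is small AND the sample is highly correlated (on average)
    # with the whole training set, so other terms can compensate for it.
    corr = K.mean(axis=1)            # mean kernel similarity to the set
    score = np.abs(alpha) / corr     # low score -> candidate for removal
    n_keep = int(len(alpha) * keep_ratio)
    keep = np.sort(np.argsort(score)[::-1][:n_keep])  # highest-impact terms
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # synthetic training samples
alpha = rng.normal(size=200)         # synthetic expansion coefficients
K = rbf_kernel(X, X, gamma=0.1)

keep = prune_expansion(alpha, K, keep_ratio=0.5)

# Compare the naive expansion with the reduced one on test points.
x_test = rng.normal(size=(20, 5))
k_test = rbf_kernel(x_test, X, gamma=0.1)
full = k_test @ alpha                    # naive kernel feature extraction
approx = k_test[:, keep] @ alpha[keep]   # reduced expansion (half the terms)

rel_err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(f"kept {len(keep)}/{len(alpha)} terms, relative error {rel_err:.3f}")
```

The reduced expansion evaluates only the kept kernel terms at test time, which is the source of the computational saving the abstract describes; the relative error quantifies how well it approximates the naive result on this synthetic data.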
