Abstract

In this paper, we derive an efficient nonlinear feature extraction method from the naive Kernel Minimum Squared Error (KMSE) method. The main contribution of the derived method is a feature extraction procedure that is much more computationally efficient than that of naive KMSE. Unlike naive KMSE, which expresses the discriminant vector in the feature space as a linear combination of all training patterns, the derived method selects a small number of patterns (referred to as "significant nodes" in this paper) from the training set and uses a linear combination of these significant nodes to approximate the discriminant vector in the feature space. The algorithm for producing significant nodes is designed according to two principles: the set of significant nodes should represent the whole training set well, and each significant node should contribute substantially to the feature extraction result. Experimental results on several benchmark datasets show that our method classifies real-world data efficiently with high recognition accuracy.
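To make the link between naive KMSE and the reduced formulation concrete, the sketch below (a minimal Python illustration under assumed details, not the paper's implementation) fits the discriminant coefficients by regularized least squares on a kernel matrix and shows that feature extraction costs one kernel evaluation per node, so restricting the expansion to a small set of significant nodes directly lowers the extraction cost. The RBF kernel, the ridge regularizer, and the uniform-subsampling placeholder for node selection are assumptions; the paper's own selection algorithm follows the two principles stated above.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y (assumed kernel)."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def fit_kmse(X_nodes, X_train, y_train, reg=1e-3):
    """
    Least-squares fit of the discriminant coefficients.
    X_nodes: the patterns whose kernel functions span the discriminant
             (all training patterns for naive KMSE, or the selected
             "significant nodes" for the reduced method).
    Returns (coefficients, bias).
    """
    K = rbf_kernel(X_train, X_nodes)                 # n_train x n_nodes
    A = np.hstack([K, np.ones((K.shape[0], 1))])     # append a bias column
    # Ridge-regularized normal equations: (A^T A + reg*I) w = A^T y
    w = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y_train)
    return w[:-1], w[-1]

def extract_feature(X_test, X_nodes, coeffs, bias):
    """Project test patterns onto the discriminant direction.
    Cost is one kernel evaluation per node, so fewer nodes -> faster extraction."""
    return rbf_kernel(X_test, X_nodes) @ coeffs + bias

# Toy usage: hypothetical data and a uniform-subsampling stand-in for node selection.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = np.sign(X_train[:, 0] * X_train[:, 1])     # toy labels in {-1, +1}
nodes = X_train[::10]                                # placeholder subset; the paper's
                                                     # significant-node selection would go here
coef, b = fit_kmse(nodes, X_train, y_train)
feats = extract_feature(X_train, nodes, coef, b)
```

With all training patterns passed as `X_nodes` the sketch reduces to naive KMSE; with a small node subset, the extraction step scales with the number of nodes rather than the training set size, which is the efficiency gain the abstract describes.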
