Abstract

Using the kernel trick idea and the kernels-as-features idea, we can construct two kinds of nonlinear feature spaces, where linear feature extraction algorithms can be employed to extract nonlinear features. In this correspondence, we study the relationship between the two kernel ideas applied to certain feature extraction algorithms such as linear discriminant analysis, principal component analysis, and canonical correlation analysis. We provide a rigorous theoretical analysis and show that they are equivalent up to different scalings on each feature. These results provide a better understanding of the kernel method.
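As an illustration (not taken from the paper), the sketch below compares the two constructions for PCA on a small synthetic data set: the kernel-trick route eigendecomposes the kernel matrix directly, while the kernels-as-features route treats each row of the kernel matrix as an explicit feature vector and applies ordinary linear PCA to it. The RBF kernel, the uncentered formulation, and all variable names are assumptions made for brevity; up to a per-feature scale factor, the two sets of extracted features coincide.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                           # toy data set (assumed)
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sq)                                  # RBF kernel matrix (assumed kernel)

# Kernel-trick route: (uncentered) kernel PCA via the eigenvectors of K.
lam, alpha = np.linalg.eigh(K)
idx = np.argsort(lam)[::-1][:5]                        # top 5 components
lam, alpha = lam[idx], alpha[:, idx]
feat_trick = K @ (alpha / np.sqrt(lam))                # projections onto unit-norm feature-space directions

# Kernels-as-features route: rows of K are explicit features; run ordinary linear PCA on them.
mu, V = np.linalg.eigh(K.T @ K)                        # scatter matrix of Z = K (centering omitted)
V = V[:, np.argsort(mu)[::-1][:5]]
feat_kaf = K @ V                                       # projections onto the top 5 directions

# Each pair of feature columns is collinear: equivalent up to a per-feature scaling.
for j in range(5):
    c = feat_trick[:, j] @ feat_kaf[:, j] / (
        np.linalg.norm(feat_trick[:, j]) * np.linalg.norm(feat_kaf[:, j]))
    print(j, np.isclose(abs(c), 1.0))

The same kind of comparison can be set up for LDA and CCA; only the linear algorithm applied on each side changes.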
