Abstract
Support vector machines are known for their high generalization capability and have been successfully applied to various classification and regression problems by employing kernel techniques, which define nonlinear feature maps from a low-dimensional input space into a very high-dimensional feature space. Kernel techniques have the advantage of making it possible to work in the implicitly introduced feature space without the associated computational cost. However, kernel functions are typically chosen without specific insight into the problem at hand. Given an explicit feature map, a kernel function is naturally defined as the inner product between pairs of data points in the feature space. This paper proposes an approach that acquires optimal feature maps, realizing both linear separability and margin maximization, through adaptive learning on training data.
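To illustrate the relationship the abstract relies on, the following minimal sketch (not from the paper; the function names and the choice of a degree-2 polynomial map are illustrative assumptions) shows how an explicitly given feature map induces a kernel as the inner product in feature space, and how that inner product agrees with the familiar closed-form polynomial kernel.

```python
import numpy as np

def feature_map(x):
    """Hypothetical explicit degree-2 polynomial feature map for 2-D input:
    phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

def kernel_from_map(x, y):
    """Kernel induced by the explicit map: inner product in feature space."""
    return feature_map(x) @ feature_map(y)

def polynomial_kernel(x, y):
    """Equivalent closed-form kernel k(x, y) = (x . y)^2,
    computed without ever constructing the feature vectors."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# Both routes yield the same value: the explicit map defines the kernel.
print(kernel_from_map(x, y))    # 16.0
print(polynomial_kernel(x, y))  # 16.0
```

The closed-form route shows the usual kernel-trick efficiency (no feature vectors are built), while the explicit-map route is the direction this paper exploits: if the map itself is available, it can be adapted to the training data rather than fixed in advance.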