Abstract
The covariance matrix in the Mahalanobis distance can be trained by semidefinite programming, but training is inefficient for large data sets. In this paper, we constrain the covariance matrix to be diagonal and train Mahalanobis kernels by linear programming (LP). Training can be formulated using ν-LP SVMs (support vector machines) or regular LP SVMs. We clarify how the solutions depend on the margin parameter. If a problem is inseparable, the ν-LP SVM can yield a zero-margin solution, which does not occur with the LP SVM. We therefore use the LP SVM for kernel training. Using benchmark data sets, we show that the proposed method gives better generalization ability than RBF (radial basis function) kernels and than Mahalanobis kernels calculated from the training data, and that it is effective at selecting input variables, especially when the number of input variables is large.
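To make the kernel concrete, the sketch below evaluates a Mahalanobis kernel with a diagonal weight matrix, K(x, z) = exp(−Σⱼ aⱼ(xⱼ − zⱼ)²), which is the standard diagonal-covariance form consistent with the abstract; the function name and example values are illustrative assumptions, not the paper's code, and the LP that fits the weights aⱼ is not shown.

```python
import numpy as np

def mahalanobis_kernel_diag(x, z, a):
    """Mahalanobis kernel with a diagonal weight matrix (assumed form).

    K(x, z) = exp(-sum_j a_j * (x_j - z_j)**2), with a_j >= 0.
    Setting all a_j equal recovers the ordinary RBF kernel.
    """
    d = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return np.exp(-np.dot(np.asarray(a, dtype=float), d * d))

# Hypothetical 3-dimensional example: a weight a_j driven to zero
# effectively deselects input variable j, which illustrates the
# variable-selection behavior mentioned in the abstract.
x = [1.0, 2.0, 0.5]
z = [0.8, 2.5, 0.9]
a = [0.5, 1.0, 0.0]  # third variable is ignored
print(mahalanobis_kernel_diag(x, z, a))
```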