Abstract

Metric learning has attracted considerable interest in classification tasks due to its strong performance. Most traditional metric learning methods rely on k-nearest neighbor (kNN) classifiers to make decisions, where the choice of k affects generalization. In this work, we propose an end-to-end metric learning framework. Specifically, a new linear metric learning model (LMML) is first proposed to jointly learn adaptive metrics and optimal classification hyperplanes, where dissimilar samples are separated by maximizing the classification margin. A nonlinear metric learning model (RLMML) is then developed, extending LMML with a bounded nonlinear kernel function. The non-convexity of the proposed models makes them difficult to optimize, so half-quadratic optimization algorithms are developed to solve the problems iteratively, alternately optimizing the classification hyperplane and the adaptive metric. Moreover, the resulting algorithms are theoretically proved to converge. Numerical experiments on different types of data sets demonstrate the effectiveness of the proposed algorithms, and the Wilcoxon test further confirms the feasibility and effectiveness of the proposed models.
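To make the alternating structure concrete: the abstract only names the approach, so the sketch below is a hypothetical stand-in, not the authors' LMML/RLMML formulation. It alternates between updating a max-margin hyperplane (w, b) with fixed metric and updating a linear metric matrix L (so that d(x, y) = ||Lx - Ly||) with fixed hyperplane, using a hinge loss on metric-transformed features; the half-quadratic machinery and the paper's exact objective are omitted, and the regularizers are assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch of alternating metric/hyperplane learning.
# Loss (assumed for this sketch, not from the paper):
#   0.5||w||^2 + 0.5||L - I||_F^2 + C * sum_i max(0, 1 - y_i (w^T L x_i + b))

def hinge_grad_w(L, w, b, X, y, C=1.0):
    """Subgradient of the loss w.r.t. (w, b), with the metric L held fixed."""
    Z = X @ L.T                         # samples transformed by the metric
    active = y * (Z @ w + b) < 1        # margin violators
    gw = w - C * (y[active, None] * Z[active]).sum(axis=0)
    gb = -C * y[active].sum()
    return gw, gb

def hinge_grad_L(L, w, b, X, y, C=1.0):
    """Subgradient of the loss w.r.t. L, with the hyperplane held fixed."""
    active = y * (X @ L.T @ w + b) < 1
    # d/dL of -y_i w^T L x_i is -outer(w, x_i); regularizer keeps L near I
    return (L - np.eye(L.shape[0])
            - C * np.outer(w, (y[active, None] * X[active]).sum(axis=0)))

def alternating_fit(X, y, n_iter=50, lr=1e-2):
    """Alternate hyperplane and metric updates; y must take values in {-1, +1}."""
    n, d = X.shape
    L, w, b = np.eye(d), np.zeros(d), 0.0
    for _ in range(n_iter):
        gw, gb = hinge_grad_w(L, w, b, X, y)   # step 1: fix L, update (w, b)
        w, b = w - lr * gw, b - lr * gb
        gL = hinge_grad_L(L, w, b, X, y)       # step 2: fix (w, b), update L
        L = L - lr * gL
    return L, w, b
```

The subgradient steps here are a generic substitute for the paper's half-quadratic updates; they only illustrate how the two blocks of variables can be optimized in alternation.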
