Abstract

Distance metric learning (DML) has received increasing attention in recent years. In this paper, we propose a constrained empirical risk minimization framework for DML. This framework enriches the state of the art on both the theoretical and the algorithmic side. Theoretically, we give a comprehensive generalization analysis by bounding the sample error and the approximation error with respect to the best model. Algorithmically, we derive an optimal gradient descent method based on Nesterov's acceleration scheme, and provide two example algorithms that use the logarithmic loss and the smoothed hinge loss, respectively. We evaluate the new framework in data classification and image retrieval experiments. The results show that it is competitive with representative DML algorithms, including Xing's method, the large margin nearest neighbor classifier, neighborhood component analysis, and regularized metric learning.
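
To make the algorithmic idea concrete, the following is a minimal sketch (not the paper's implementation) of constrained metric learning with Nesterov-style accelerated projected gradient descent: a Mahalanobis matrix is learned under a positive semidefinite constraint by minimizing a smoothed hinge loss over pairwise constraints. Function names such as `smoothed_hinge` and `nesterov_dml`, and all hyperparameter choices, are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: accelerated projected gradient for pairwise DML
# with a smoothed (Huber-style) hinge loss and a PSD constraint on M.
import numpy as np

def smoothed_hinge(z, gamma=1.0):
    """Smoothed hinge loss value and derivative at margin z."""
    if z >= 1.0:
        return 0.0, 0.0
    if z <= 1.0 - gamma:
        return 1.0 - z - gamma / 2.0, -1.0
    return (1.0 - z) ** 2 / (2.0 * gamma), -(1.0 - z) / gamma

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    M = (M + M.T) / 2.0
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

def nesterov_dml(X, pairs, labels, lr=0.01, n_iter=200, gamma=1.0):
    """Accelerated projected gradient for a pairwise DML objective.

    pairs  : list of (i, j) index pairs
    labels : +1 if x_i and x_j should be close, -1 if they should be far apart
    """
    d = X.shape[1]
    M = np.eye(d)        # current iterate
    Y = M.copy()         # Nesterov look-ahead point
    t = 1.0
    for _ in range(n_iter):
        grad = np.zeros((d, d))
        for (i, j), y in zip(pairs, labels):
            diff = X[i] - X[j]
            dist = diff @ Y @ diff              # squared Mahalanobis distance
            _, dloss = smoothed_hinge(y * (1.0 - dist), gamma)
            grad += dloss * (-y) * np.outer(diff, diff)
        grad /= len(pairs)
        # gradient step at the look-ahead point, then project onto the PSD cone
        M_new = project_psd(Y - lr * grad)
        # Nesterov momentum update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = M_new + ((t - 1.0) / t_new) * (M_new - M)
        M, t = M_new, t_new
    return M
```

Replacing `smoothed_hinge` with a logistic (logarithmic) loss gives the analogous second variant; the projection step is what realizes the constraint in the constrained empirical risk minimization formulation.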
