Abstract

The aim of metric learning is to learn a mapping that reduces intraclass distances and increases interclass distances. Because traditional metric learning algorithms must optimize a large number of parameters, they can suffer from high computational complexity and overfitting when training data are insufficient. To alleviate this, we propose a weakly supervised compositional metric learning (WSCML) method, which utilizes a set of predetermined local discriminant metrics and learns the optimal weighted combination of these component metrics. Under the large-margin framework, WSCML effectively improves verification accuracy by constraining the Mahalanobis distance of positive sample pairs to be below a small threshold and that of negative sample pairs to be above a large threshold. In addition, we optimize the proposed WSCML with each of three regularization terms, which control the sparsity of the solution while maintaining its feasibility. Experimental results on the KinFaceW-I, fine-grained face verification (FGFV), and Labeled Faces in the Wild (LFW) datasets show the effectiveness of the proposed method.
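The compositional idea in the abstract can be sketched in a few lines: the learned metric is a weighted sum of predetermined component Mahalanobis metrics, and only the combination weights are optimized, with hinge penalties pushing positive-pair distances below a small threshold and negative-pair distances above a large one. The data, thresholds, and the simple projected-subgradient loop below are illustrative assumptions, not the authors' actual WSCML algorithm or regularizers.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 5, 3  # feature dimension, number of component metrics (assumed)

# Predetermined component metrics: random PSD matrices stand in for the
# paper's precomputed local discriminant metrics.
components = []
for _ in range(K):
    A = rng.standard_normal((d, d))
    components.append(A @ A.T)

def pair_features(x, y):
    """Per-component squared Mahalanobis distances of a sample pair."""
    diff = x - y
    return np.array([diff @ M @ diff for M in components])

# Toy weakly supervised data: pairs labeled only similar / dissimilar.
pos, neg = [], []
for _ in range(20):
    x = rng.standard_normal(d)
    pos.append((x, x + 0.1 * rng.standard_normal(d)))   # close pair
    neg.append((rng.standard_normal(d), rng.standard_normal(d)))

tau_pos, tau_neg = 0.05, 60.0   # small / large thresholds (assumed values)
w = np.full(K, 1.0 / K)         # uniform initial combination weights

for _ in range(200):
    grad = np.zeros(K)
    for x, y in pos:            # hinge: penalize d(x, y) > tau_pos
        f = pair_features(x, y)
        if w @ f > tau_pos:
            grad += f
    for x, y in neg:            # hinge: penalize d(x, y) < tau_neg
        f = pair_features(x, y)
        if w @ f < tau_neg:
            grad -= f
    # Subgradient step, then project onto nonnegative weights so the
    # combined metric stays a valid (PSD) Mahalanobis metric.
    w = np.maximum(w - 1e-4 * grad / (len(pos) + len(neg)), 0.0)

d_pos = np.mean([w @ pair_features(x, y) for x, y in pos])
d_neg = np.mean([w @ pair_features(x, y) for x, y in neg])
```

After training, the learned weights keep the average positive-pair distance well below the average negative-pair distance, which is the separation the large-margin constraints are designed to enforce.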
