Abstract

The usefulness of metric learning for image classification has been established, and it has attracted increasing attention in recent research. Conventional metric learning assumes that source and target instances are identically distributed; however, real-world problems often violate this assumption. Achieving good classification would then require abundant labeled images, which are often inaccessible due to the high cost of labeling, so knowledge transfer can be exploited instead. In this paper, we present a metric transfer learning approach, entitled "Metric Transfer Learning via Geometric Knowledge Embedding (MTL-GKE)", that incorporates metric learning into transfer learning. Specifically, we learn two projection matrices, one for each domain, to project the source and target domains into a new feature space. In this shared subspace, a Mahalanobis distance metric is learned to maximize inter-class and minimize intra-class distances in the target domain, while a novel instance reweighting scheme based on graph optimization simultaneously adjusts the weights of source samples for distribution matching. The results of experiments on several object and handwriting recognition datasets demonstrate the effectiveness of the proposed MTL-GKE compared with other state-of-the-art methods.
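To make the described pipeline concrete, the following is a minimal illustrative sketch, not the authors' MTL-GKE implementation: it assumes hypothetical projection matrices P_s and P_t (one per domain) and a metric matrix M, and only shows how projected source and target samples could be compared with a Mahalanobis distance in a shared subspace. The actual matrices in MTL-GKE are learned from data rather than drawn at random.

```python
# Illustrative sketch only: NOT the authors' MTL-GKE implementation.
# P_s, P_t, and M are placeholder (randomly initialized) matrices standing in
# for the learned projections and metric described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

d_s, d_t, k = 100, 80, 20            # source dim, target dim, shared-subspace dim
P_s = rng.standard_normal((k, d_s))  # hypothetical source-domain projection
P_t = rng.standard_normal((k, d_t))  # hypothetical target-domain projection

# Parameterize the metric as M = L^T L so it stays positive semi-definite.
L = rng.standard_normal((k, k))
M = L.T @ L

def mahalanobis_sq(u, v, M):
    """Squared Mahalanobis distance (u - v)^T M (u - v)."""
    diff = u - v
    return float(diff @ M @ diff)

x_src = rng.standard_normal(d_s)     # a source-domain sample
x_tgt = rng.standard_normal(d_t)     # a target-domain sample

# Project both samples into the shared subspace, then compare them.
z_src = P_s @ x_src
z_tgt = P_t @ x_tgt
print("cross-domain distance:", mahalanobis_sq(z_src, z_tgt, M))
```

In the method summarized above, the projections and the metric would be optimized jointly so that same-class pairs in the target domain become close and different-class pairs become far apart, while source samples are reweighted via graph optimization to match the target distribution.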
