Abstract

This paper presents a hardness-aware deep metric learning (HDML) framework for image clustering and retrieval. Most existing deep metric learning methods employ a hard negative mining strategy to alleviate the lack of informative samples during training. However, this mining strategy uses only a subset of the training data, which may not be enough to characterize the global geometry of the embedding space comprehensively. To address this problem, we perform linear interpolation on embeddings to adaptively manipulate their hardness levels and generate corresponding label-preserving synthetics for recycled training, so that the information buried in all samples is fully exploited and the metric is always challenged with proper difficulty. Since a single synthetic per sample may still be insufficient to describe the unobserved distribution of the training data, which is crucial for generalization performance, we further extend HDML to generate multiple synthetics for each sample. We propose a random hardness-aware deep metric learning (HDML-R) method and an adaptive hardness-aware deep metric learning (HDML-A) method, which sample multiple random and adaptive directions, respectively, for hardness-aware synthesis. Because the generated synthetics may not all be useful and adaptive, we propose a synthetic selection method with three criteria that retains only the qualified synthetics beneficial to training the metric. Extensive experimental results on the widely used CUB-200-2011, Cars196, Stanford Online Products, In-Shop Clothes Retrieval, and VehicleID datasets demonstrate the effectiveness of the proposed framework.
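The core idea of hardness-aware synthesis can be illustrated with a minimal sketch: interpolate a negative embedding toward the anchor to raise its hardness, while enforcing a minimum distance so the synthetic remains label-preserving. This is an illustrative simplification, not the paper's full HDML pipeline; the function name, the `hardness` parameter, and the `min_dist` margin are assumptions introduced here for clarity.

```python
import numpy as np

def hardness_aware_synthetic(anchor, negative, hardness, min_dist=0.1):
    """Move a negative embedding toward the anchor by linear interpolation.

    hardness in [0, 1]: 0 returns the original negative unchanged,
    1 moves it as close to the anchor as min_dist allows. The min_dist
    floor is a stand-in for the label-preserving constraint: the
    synthetic must not collapse onto the anchor's class.
    (Hypothetical sketch, not the authors' implementation.)
    """
    direction = anchor - negative
    dist = np.linalg.norm(direction)
    if dist <= min_dist:
        # Already at or inside the margin; nothing to synthesize.
        return negative.copy()
    # Largest step that still keeps the synthetic min_dist from the anchor.
    max_step = dist - min_dist
    return negative + (hardness * max_step / dist) * direction
```

With `hardness = 1.0` the synthetic lands exactly `min_dist` away from the anchor (maximally hard); with `hardness = 0.0` it reduces to the original negative, recovering standard training.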
