Abstract

Deep metric learning aims to learn a non-linear function that maps raw data to a discriminative lower-dimensional embedding space, where semantically similar samples exhibit higher similarity than dissimilar ones. Most existing approaches process each raw sample in two steps: first mapping it to a higher-dimensional feature space via a fixed backbone, then projecting that feature space to a lower-dimensional embedding space via a linear layer. This paradigm, however, inevitably leads to a Generalization Bottleneck (GB) problem. Specifically, GB refers to the limitation that the generalization capacity of the lower-dimensional embedding space is inferior to that of the higher-dimensional feature space at test time. To mitigate this capacity gap between the feature space and the embedding space, we propose a fully learnable module, dubbed Relational Knowledge Preserving (RKP), which improves the generalization capacity of the lower-dimensional embedding space by transferring the mutual similarity of instances. The proposed RKP module can be integrated into general deep metric learning approaches, and experiments on different benchmarks show that it significantly improves the performance of the original models.
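The abstract does not specify how the mutual similarity of instances is transferred, but the core idea, matching the relational structure of the embedding space to that of the feature space, can be sketched as a simple auxiliary loss. The snippet below is a minimal illustration, not the paper's actual RKP module: it assumes cosine similarity as the relation measure and mean-squared error as the matching criterion, and the function name `rkp_loss` and the weighting scheme are hypothetical.

```python
import torch
import torch.nn.functional as F

def rkp_loss(features: torch.Tensor, embeddings: torch.Tensor) -> torch.Tensor:
    """Illustrative relational-knowledge-preserving loss (sketch).

    Encourages the pairwise similarity structure of the low-dimensional
    embeddings to match that of the high-dimensional backbone features,
    so the embedding space inherits the feature space's instance-to-instance
    (relational) knowledge. Cosine similarity and MSE are assumptions here.
    """
    f = F.normalize(features, dim=1)    # (B, D_feat), backbone features
    e = F.normalize(embeddings, dim=1)  # (B, D_emb), with D_emb < D_feat
    sim_f = f @ f.t()                   # relations in the feature space
    sim_e = e @ e.t()                   # relations in the embedding space
    # Match the two relational structures; the feature-space relations act
    # as the (detached) target, since the backbone is treated as fixed.
    return F.mse_loss(sim_e, sim_f.detach())
```

In a typical setup this auxiliary term would be added to the base metric-learning objective, e.g. `total_loss = metric_loss + lam * rkp_loss(features, embeddings)`, where the trade-off weight `lam` is likewise an assumed hyperparameter.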
