Abstract

Deep metric learning aims to construct an embedding space in which samples from the same class are grouped closely together, while samples from different classes are widely separated. Many existing deep metric learning methods focus on maximizing inter-class differences, capturing semantically related information by increasing the distance between samples of different classes in the embedding space. However, by compressing all positive samples together and enforcing large margins between classes, they inadvertently disrupt the local structure among similar samples. Disregarding the intra-class variance present in this local structure yields an embedding space that generalizes poorly to unseen classes: the network tends to overfit the training set, performs poorly on the test set, and its performance can even collapse during evaluation. To address this, this paper introduces a self-supervised generative assisted ranking framework that offers a semi-supervised perspective on learning intra-class variance within traditional supervised deep metric learning. Specifically, samples are synthesized from positive examples with varying intensities and diversities in order to simulate the complex variations among similar samples. An intra-class ranking loss is then devised, based on the principles of self-supervised learning, which constrains the ordering of the synthesized samples according to their generation intensity, enabling the network to capture subtle intra-class differences and preserve the intra-class distribution. With this approach, a more realistic embedding space is obtained, preserving both the global and local structures of the samples.
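To make the ordering constraint concrete, the following is a minimal PyTorch-style sketch of how an intra-class ranking loss over synthesized samples could look. The function name `intra_class_ranking_loss`, the Euclidean distance, the margin value, and the pairwise formulation over consecutive ranks are illustrative assumptions, not the paper's actual formulation.

```python
import torch


def intra_class_ranking_loss(anchor, synthetic, intensities, margin=0.05):
    """Sketch of an intra-class ranking loss over synthesized samples.

    anchor:      (D,) embedding of the original positive sample.
    synthetic:   (K, D) embeddings of K samples synthesized from that positive,
                 one per generation intensity.
    intensities: (K,) generation intensities used to create each sample.
    margin:      minimum gap enforced between consecutive ranks (assumed value).
    """
    # Distance of each synthesized sample to its anchor in the embedding space.
    dists = torch.norm(synthetic - anchor.unsqueeze(0), dim=1)  # (K,)

    # Order distances by generation intensity (weakest first); the ranking
    # constraint says lower-intensity samples should lie closer to the anchor.
    dists = dists[torch.argsort(intensities)]

    # Penalize consecutive pairs that violate the expected ordering: the loss
    # is zero when the higher-intensity sample is at least `margin` farther
    # from the anchor than the lower-intensity one, and grows linearly otherwise.
    violations = torch.relu(dists[:-1] - dists[1:] + margin)
    return violations.mean()


# Example usage with random embeddings and four increasing intensities.
anchor = torch.randn(128)
synthetic = torch.randn(4, 128)
intensities = torch.tensor([0.1, 0.2, 0.4, 0.8])
loss = intra_class_ranking_loss(anchor, synthetic, intensities)
```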
