Abstract

Deep metric learning (DML) methods typically rely on class labels to cluster positive samples as tightly as possible while pushing them away from negative samples. However, this approach tends to disregard vital information inherent in the data, such as intra-class variation, which can hinder the generalization of the trained model. To address this issue, we propose an online batch diffusion-based self-distillation method (OBD-SD), which consists of a progressive self-distillation (PSD) technique and an online batch diffusion process (OBDP). Specifically, PSD is a simple yet effective self-distillation technique that promotes diversity in the learned embedding representations. OBDP applies a diffusion process within each mini-batch to reveal the intrinsic relationships among samples and produce better soft distance targets. Combining PSD with OBDP extracts richer relational information among samples by capturing the local geometric structure of the manifold in each mini-batch. OBD-SD is a highly flexible framework that can be integrated into state-of-the-art DML methods. Our experimental results on the CUB200, CARS196, and Stanford Online Products datasets demonstrate that OBD-SD consistently enhances the performance of existing DML methods without requiring additional training time, achieving competitive results.
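To make the two components concrete, below is a minimal PyTorch-style sketch of one plausible instantiation. The function names, the EMA teacher update, and the hyperparameters (alpha, tau, m) are illustrative assumptions rather than the authors' exact implementation; the diffusion step uses the standard closed-form formulation (I - alpha*S)^{-1} on the normalized batch affinity matrix as one common way to realize a diffusion process.

```python
# Hedged sketch of the two components described in the abstract: progressive
# self-distillation (PSD) and the online batch diffusion process (OBDP).
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def batch_diffusion(sim: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    """OBDP sketch: propagate similarity along the mini-batch affinity graph.

    Uses the standard closed-form diffusion (I - alpha*S)^{-1} on the
    symmetrically normalized affinity matrix S, so each soft target reflects
    the local manifold structure of the batch rather than raw pairwise scores.
    """
    A = torch.relu(sim)                      # keep non-negative affinities
    A = A - torch.diag(torch.diag(A))        # remove self-loops
    d = A.sum(dim=1).clamp(min=1e-8)
    S = A / torch.sqrt(d[:, None] * d[None, :])   # S = D^(-1/2) A D^(-1/2)
    I = torch.eye(S.size(0), device=S.device)
    return torch.linalg.inv(I - alpha * S)   # diffused similarity matrix


def psd_obdp_loss(student_emb, teacher_emb, tau: float = 0.1) -> torch.Tensor:
    """PSD sketch: the student matches diffused teacher relations via KL."""
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    mask = torch.eye(s.size(0), dtype=torch.bool, device=s.device)
    with torch.no_grad():                    # soft distance targets
        target = batch_diffusion(t @ t.T) / tau
        target = F.softmax(target.masked_fill(mask, -1e4), dim=1)
    logits = (s @ s.T) / tau                 # student's batch relations
    log_p = F.log_softmax(logits.masked_fill(mask, -1e4), dim=1)
    return F.kl_div(log_p, target, reduction="batchmean")


@torch.no_grad()
def update_teacher(teacher, student, m: float = 0.999):
    """'Progressive' teacher: a slowly updated snapshot of the student
    itself (shown here as an EMA, one common self-distillation choice)."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1.0 - m)
```

In a plug-in setup of this kind, the distillation term would simply be added (with a weighting coefficient) to the loss of whichever base DML method it is integrated with, which matches the abstract's claim that OBD-SD layers on top of existing methods rather than replacing them.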
