Abstract

Zero-shot sketch-based image retrieval (ZS-SBIR) is challenging due to the modal gap between the distributions of sketches and images and the inconsistency of label spaces between training and testing. Previous methods mitigate the modal gap by projecting sketches and images into a joint embedding space. Most of them also bridge seen and unseen classes by leveraging semantic embeddings, e.g., word vectors and hierarchical similarities. In this paper, we propose Relationship-Preserving Knowledge Distillation (RPKD) to learn generalizable embeddings from the perspective of knowledge distillation, bypassing the use of semantic embeddings. In particular, we first distill instance-level knowledge to preserve inter-class relationships without semantic similarities, which require extra effort to collect. We also reconcile the contrastive relationships among instances between different embedding spaces, which is complementary to the instance-level relationships. Furthermore, embedding-induced supervision, which measures the similarities of an instance to a subset of the teacher's class embedding centers, is developed to align the student's classification confidence. Extensive experiments on three benchmark ZS-SBIR datasets, i.e., Sketchy, TU-Berlin, and QuickDraw, demonstrate the superiority of the proposed RPKD approach compared to state-of-the-art methods.
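To make the relationship-preserving idea concrete, the sketch below shows one plausible form of a distillation loss that matches the pairwise similarity structure of a student embedding space to that of a frozen teacher. This is a minimal illustration, not the authors' implementation: the use of cosine similarity, the mean-squared-error penalty, and the tensor shapes are assumptions.

```python
# Minimal sketch (not the paper's code): relationship-preserving distillation
# that aligns instance-level pairwise similarities between a student and a
# frozen teacher embedding space.
import torch
import torch.nn.functional as F

def relation_distill_loss(student_emb: torch.Tensor,
                          teacher_emb: torch.Tensor) -> torch.Tensor:
    """student_emb, teacher_emb: (batch, dim) embeddings of the same instances."""
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    # Pairwise cosine-similarity matrices encode instance-level relationships.
    sim_s = s @ s.t()
    sim_t = t @ t.t()
    # Penalize discrepancies so the student preserves the teacher's relationships.
    return F.mse_loss(sim_s, sim_t)

# Hypothetical usage with random features standing in for sketch/image embeddings.
student = torch.randn(8, 128, requires_grad=True)
teacher = torch.randn(8, 256)  # the teacher's dimensionality may differ
loss = relation_distill_loss(student, teacher)
loss.backward()
```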
