Abstract

Zero-shot learning (ZSL) aims to learn models that can recognize images of semantically related unseen categories by transferring attribute-based knowledge learned from training data of seen classes to unseen test data. Because visual attributes play a vital role in ZSL, recent embedding-based methods usually focus on learning a compatibility function between the visual representation and the class semantic attributes. In this work, in addition to learning the region embeddings of different semantic attributes to maintain the generalization capability of the learned model, we further improve the discriminative power of the learned visual features themselves through contrastive embedding, which exploits both class-wise and instance-wise supervision for generalized ZSL (GZSL) under an attribute-guided, weakly supervised representation learning framework. To further improve the robustness of the ZSL model, we also propose to train it under a consistency regularization constraint that takes full advantage of self-supervised signals from the image under various perturbed augmentations, making the model robust to occluded or attribute-unrelated regions. Extensive experimental results demonstrate the effectiveness of the proposed ZSL method, which achieves performance superior to state-of-the-art methods on three widely used benchmark datasets: CUB, SUN, and AWA2. Our source code is released at https://github.com/KORIYN/CC-ZSL.
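
The abstract does not give the exact loss formulations, so the following is only a minimal sketch of the two training signals it describes, assuming a PyTorch setting: a standard supervised contrastive loss for the class-wise term, and a KL-divergence consistency term between a weakly and a strongly perturbed view. The function names (supcon_loss, consistency_loss) and the temperature value are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Class-wise contrastive loss over L2-normalized embeddings.

    features: (N, D) embeddings; labels: (N,) class ids. Samples sharing a
    label are pulled together; all other samples are pushed apart.
    """
    n = features.size(0)
    sim = features @ features.t() / temperature               # (N, N) similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all samples except the anchor itself
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of the positives, for each anchor that has any
    pos_counts = pos_mask.sum(1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts
    return per_anchor[pos_mask.any(1)].mean()

def consistency_loss(logits_weak, logits_strong):
    """Consistency regularization: the prediction on a heavily perturbed view
    (e.g. with occluded or attribute-unrelated regions) should match a
    stop-gradient target computed from a weakly augmented view."""
    target = F.softmax(logits_weak.detach(), dim=1)
    return F.kl_div(F.log_softmax(logits_strong, dim=1), target,
                    reduction='batchmean')

# toy usage: 8 samples, 16-d embeddings, 4 classes
z = F.normalize(torch.randn(8, 16), dim=1)
y = torch.randint(0, 4, (8,))
print(supcon_loss(z, y))
```

If the labels are replaced by instance ids of two augmented views of each image, the same contrastive function reduces to an instance-wise (SimCLR-style) term, which is one common way to combine the two levels of supervision the abstract mentions.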
