Abstract

Zero-shot learning (ZSL) is a machine learning task that aims to recognize samples from classes not observed during training. Transductive ZSL (TZSL) is a more realistic and effective paradigm that leverages unlabeled unseen-class data during training to reduce the bias towards seen classes. However, most existing TZSL methods neglect the information gap between the visual and semantic spaces and thus fail to generate distribution-consistent unseen features. To address this issue, we propose a novel TZSL approach named Anchor-based Discriminative Dual Distribution Calibrated Feature Generative Network (AD3C-FGN), which performs anchor-based distribution calibration in both the visual and semantic spaces to improve the generalization ability of the model. In AD3C-FGN, we adopt a conditional generative adversarial network with an unseen-class discriminator to construct a Y-shaped generation model that mitigates the domain shift problem. Moreover, an AD3C module is designed to calibrate the distributions of generated and real samples in both the visual and semantic spaces using real-sample anchors, and to enhance the discriminability of the generated samples. AD3C enforces each generated sample to be closer to its homogeneous anchor and farther away from inhomogeneous anchors in both spaces. Extensive experimental results on six popular ZSL benchmarks demonstrate that our method achieves promising performance in different settings. The source code of our model has been released at https://github.com/ZYi-CQU/AD3C-FGN.
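The anchor-based calibration described above can be illustrated with a minimal sketch: each generated sample is pulled toward the real-sample anchor of its own class and pushed away from anchors of other classes, with the same loss applied in both the visual and semantic spaces. The function name, the margin-based triplet-style formulation, and all variable names below are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def ad3c_calibration_loss(gen_feats, labels, anchors, margin=1.0):
    """Hypothetical anchor-based calibration loss in one space (visual or semantic).

    gen_feats: (B, D) generated features.
    labels:    (B,)   class indices of the generated samples.
    anchors:   (C, D) per-class anchors computed from real samples.
    """
    # Distance from each generated feature to every class anchor: (B, C)
    dists = torch.cdist(gen_feats, anchors)

    # Distance to the homogeneous (same-class) anchor: (B,)
    pos = dists.gather(1, labels.unsqueeze(1)).squeeze(1)

    # Mask out the homogeneous anchor and keep the hardest inhomogeneous one: (B,)
    mask = F.one_hot(labels, anchors.size(0)).bool()
    neg = dists.masked_fill(mask, float("inf")).min(dim=1).values

    # Pull toward the homogeneous anchor, push away from inhomogeneous anchors.
    return F.relu(pos - neg + margin).mean()

# Assumed usage: the same loss would be computed in both spaces, e.g.
# total = ad3c_calibration_loss(v_gen, y, v_anchors) + ad3c_calibration_loss(s_gen, y, s_anchors)
```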
