Abstract

Generalized Zero-Shot Learning (GZSL) is a challenging task: although no visual samples of unseen classes are available during training, the classifier must learn to recognize all classes (i.e., both seen and unseen). Because generative models can synthesize samples of unseen classes, they have been widely adopted for GZSL. However, these models learn only from the seen classes, so the unseen-class features they generate are usually poorly discriminative, which leads to low classification accuracy on unseen classes. To address this problem, this paper proposes a novel semantic-related feature generative (SRFG) model that improves visual-semantic consistency and effectively alleviates the seen-unseen bias. SRFG can generate an arbitrary number of semantic-related, discriminative features for both seen and unseen classes. Extensive experiments on four benchmark datasets show that the proposed model significantly outperforms state-of-the-art methods.
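To make the generative GZSL recipe described above concrete, the following is a minimal, self-contained sketch of the generic pipeline (not the paper's SRFG model): a conditional generator is fit on seen-class (attribute, feature) pairs, used to synthesize features for unseen classes from their attributes, and a classifier over all classes is then trained on real seen features plus synthetic unseen features. All class counts, dimensions, and the least-squares regressor standing in for the generative model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): 4 seen and 2 unseen classes,
# each described by a 5-d semantic attribute vector; visual features are 16-d.
n_seen, n_unseen, d_attr, d_feat = 4, 2, 5, 16
attrs = rng.normal(size=(n_seen + n_unseen, d_attr))
true_map = rng.normal(size=(d_attr, d_feat))  # hidden attribute->feature relation

def sample_features(cls, n):
    """Draw n visual features for class `cls`: attribute projection plus noise."""
    return attrs[cls] @ true_map + 0.1 * rng.normal(size=(n, d_feat))

# 1) Training data contains visual features for SEEN classes only.
X_seen = np.vstack([sample_features(c, 50) for c in range(n_seen)])
A_seen = np.repeat(attrs[:n_seen], 50, axis=0)

# 2) "Generator": a least-squares map from attributes to features, fit only on
#    seen-class pairs (a stand-in for a conditional GAN/VAE generator).
W, *_ = np.linalg.lstsq(A_seen, X_seen, rcond=None)

# 3) Synthesize any number of features for UNSEEN classes from their attributes.
X_fake = np.vstack([attrs[c] @ W + 0.1 * rng.normal(size=(50, d_feat))
                    for c in range(n_seen, n_seen + n_unseen)])

# 4) Final GZSL classifier over ALL classes: nearest class mean, trained on
#    real seen features plus synthetic unseen features.
means = np.vstack(
    [X_seen[c * 50:(c + 1) * 50].mean(0) for c in range(n_seen)] +
    [X_fake[i * 50:(i + 1) * 50].mean(0) for i in range(n_unseen)])

def classify(x):
    return int(np.argmin(np.linalg.norm(means - x, axis=1)))

# Evaluate on held-out samples of one unseen class (index n_seen).
test = sample_features(n_seen, 20)
acc = np.mean([classify(x) == n_seen for x in test])
print(f"unseen-class accuracy: {acc:.2f}")
```

The paper's point maps onto step 2: because the generator sees only seen-class pairs, the synthetic unseen features can drift from the true unseen distribution and lose discriminability, which is the failure mode SRFG is designed to mitigate.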

