Abstract

• We propose a cross-modal propagation network (CMPN) based on meta-learning for generalized zero-shot learning.
• CMPN incorporates adaptive graph construction and label propagation into the generative ZSL framework.
• CMPN guarantees intra-class compactness and inter-class separation in the latent space.
• Experimental results validate the effectiveness of CMPN.

Zero-shot learning (ZSL) aims to recognize unseen classes by transferring semantic knowledge from seen classes to unseen ones. Since only seen classes are available during training, the domain bias problem, i.e., the trained model being biased toward seen classes, is the central challenge in ZSL. To alleviate this bias, generation-based approaches build generative models that synthesize fake visual features of unseen classes from semantic vectors. However, most existing generative methods still suffer some degree of domain bias caused by the ambiguous generation of fake features. In this paper, we propose a cross-modal propagation network (CMPN), which adopts an episode-based meta-learning strategy. CMPN incorporates adaptive graph construction and label propagation into the generative ZSL framework to guarantee unambiguous and discriminative fake feature generation. By further leveraging the manifold structure of the different modalities in the latent space, CMPN implicitly ensures intra-class compactness and inter-class separation through label-propagation classification in the latent space. Extensive experiments on four datasets validate the effectiveness of CMPN under both ZSL and generalized ZSL (GZSL) settings.
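The abstract does not spell out the propagation step, but the label-propagation classification it refers to is commonly implemented in the closed form of Zhou et al.: build a Gaussian affinity graph over latent features, symmetrically normalize it, and solve a linear system to spread labels from seeded nodes to the rest. The sketch below is a minimal, generic version of that classifier (the `sigma` and `alpha` parameters and the function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def label_propagation(features, labels, n_classes, sigma=1.0, alpha=0.99):
    """Closed-form label propagation on a Gaussian affinity graph.

    features : (n, d) array of latent features.
    labels   : (n,) int array; class index for labeled rows, -1 for unlabeled.
    Returns predicted class indices for all n rows.
    """
    n = features.shape[0]
    # Pairwise squared distances -> Gaussian affinities, zero self-affinity.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed matrix Y (all-zero rows for unlabeled points).
    Y = np.zeros((n, n_classes))
    mask = labels >= 0
    Y[np.arange(n)[mask], labels[mask]] = 1.0
    # Closed-form propagation: F = (I - alpha * S)^{-1} Y.
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)
```

In a generative ZSL pipeline, the graph would mix real seen-class features with generated fake features in the latent space, so the propagated labels penalize ambiguous generations that fall between class manifolds.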
