Abstract

Zero-shot learning (ZSL) is challenging because no data from the unseen classes are available during training. Existing works attempt to establish a mapping between the visual and class spaces through a common intermediate semantic space. Their main limitation is a strong bias towards the seen classes, known as the domain shift problem, which leads to unsatisfactory performance on both conventional and generalized ZSL tasks. To tackle this challenge, we propose to convert ZSL into a conventional supervised learning problem by generating features for the unseen classes. To this end, we propose a joint generative model, called Zero-VAE-GAN, that couples a variational autoencoder (VAE) and a generative adversarial network (GAN) to generate high-quality unseen-class features. To enhance class-level discriminability, an adversarial categorization network is incorporated into the joint framework. In addition, we propose two self-training strategies to augment unlabeled unseen-class features for the transductive extension of our model, addressing the domain shift problem to a large extent. Experimental results on five standard benchmarks and a large-scale dataset demonstrate the superiority of our generative model over state-of-the-art methods on conventional and, especially, generalized ZSL tasks. Moreover, the further improvement achieved in the transductive setting demonstrates the effectiveness of the proposed self-training strategies.
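
Below is a minimal sketch of the kind of conditional VAE-GAN feature generator the abstract describes, assuming PyTorch. All module names, layer sizes, and the single combined training step are illustrative assumptions rather than the paper's exact Zero-VAE-GAN architecture; they only show how VAE reconstruction, KL, adversarial, and categorization terms can be combined into one generator-side objective.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a visual feature plus class embedding to a latent Gaussian."""
    def __init__(self, feat_dim=2048, attr_dim=85, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + attr_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)

    def forward(self, x, a):
        h = self.net(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Decodes a latent code plus class embedding into a synthetic feature."""
    def __init__(self, feat_dim=2048, attr_dim=85, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + attr_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class Discriminator(nn.Module):
    """Scores real vs. generated features (the GAN branch)."""
    def __init__(self, feat_dim=2048, attr_dim=85):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + attr_dim, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))

def generator_step(enc, gen, dis, cls, x, a, y):
    """One joint generator-side step: reconstruction + KL + adversarial +
    categorization. The discriminator's own update is omitted for brevity."""
    mu, logvar = enc(x, a)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    x_fake = gen(z, a)
    recon = nn.functional.mse_loss(x_fake, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    adv = -dis(x_fake, a).mean()                        # fool the discriminator
    cat = nn.functional.cross_entropy(cls(x_fake), y)   # class-level discriminability
    return recon + kl + adv + cat

# Illustrative usage with random tensors (40 seen classes is an assumption):
enc, gen = Encoder(), Generator()
dis, cls = Discriminator(), nn.Linear(2048, 40)
x = torch.randn(8, 2048)              # CNN visual features
a = torch.randn(8, 85)                # class embeddings (e.g., attributes)
y = torch.randint(0, 40, (8,))        # seen-class labels
loss = generator_step(enc, gen, dis, cls, x, a, y)
loss.backward()

Once such a model is trained, the generator can be conditioned on unseen-class embeddings to synthesize features for the unseen classes, after which any standard supervised classifier can be trained on the synthetic data.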
