Abstract

Generalized zero-shot learning (GZSL) for image classification is a challenging task: not only are training examples from novel classes absent, but classification performance is also evaluated on both seen and unseen classes. This setting is vital in realistic scenarios where large amounts of labeled data are not easily available. Some existing GZSL methods recognize novel classes using latent features learned by a variational autoencoder (VAE), but few address the problem that the large intra-class variance of image features degrades the quality of these latent features. We therefore propose to match samples to class-level soul samples, regularized by pre-trained classifiers, which reduces this variance and enables the VAE to generate much more discriminative latent features for training the softmax classifier. We evaluate our method on four benchmark datasets, i.e., CUB, SUN, AWA1, and AWA2, and the experimental results demonstrate that our model achieves a new state of the art in both generalized zero-shot and few-shot learning settings.
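To make the core idea concrete, the sketch below is a minimal, assumed illustration (not the authors' implementation) of a VAE over image features whose latent codes are pulled toward per-class "soul samples", here taken simply as class centroids, whereas the paper additionally regularizes with pre-trained classifiers. All names (FeatureVAE, soul_regularizer, the weighting lam) are hypothetical.

```python
# Minimal sketch, assuming PyTorch: a feature-space VAE with a soul-sample
# regularizer that reduces intra-class variance of the latent codes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureVAE(nn.Module):
    def __init__(self, feat_dim=2048, latent_dim=64):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 512)
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, feat_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar, z


def soul_regularizer(z, labels, num_classes):
    """Pull each latent code toward its class centroid (a stand-in for a soul sample)."""
    loss = z.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            soul = z[mask].mean(dim=0, keepdim=True).detach()
            loss = loss + F.mse_loss(z[mask], soul.expand_as(z[mask]))
    return loss / num_classes


def vae_loss(x, recon, mu, logvar, z, labels, num_classes, lam=0.1):
    # Standard VAE objective plus the soul-sample term; lam is an assumed weight.
    recon_l = F.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_l + kld + lam * soul_regularizer(z, labels, num_classes)
```

Under this reading, the discriminative latent codes z produced after training would then be used as inputs to an ordinary softmax classifier over seen and unseen classes.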
