Abstract

Generalized zero-shot learning (GZSL) trains a deep learning model to recognize classes that are unseen during training. Conventionally, conditional generative models have been employed to synthesize training data for unseen classes from their class attributes. In this paper, we propose a new conditional generative model that substantially improves GZSL performance. In a nutshell, the proposed model, called the conditional Wasserstein autoencoder (CWAE), minimizes the Wasserstein distance between the real and generated image feature distributions using an encoder-decoder architecture. Through extensive experiments on various benchmark datasets, we show that the proposed CWAE outperforms conventional generative models in terms of GZSL classification performance.
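
The abstract gives no implementation details, but its description (an encoder-decoder conditioned on attributes that minimizes a Wasserstein distance between real and generated feature distributions) maps naturally onto a conditional variant of the WAE-MMD objective of Tolstikhin et al. Below is a minimal PyTorch sketch under that assumption. The reconstruction term plus an MMD penalty pulling the encoded codes toward a Gaussian prior is the standard penalized form of the Wasserstein autoencoder objective; all dimensions, network sizes, the kernel bandwidth, and the penalty weight LAM are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only; the abstract does not specify them.
FEAT_DIM, ATTR_DIM, LATENT_DIM = 2048, 85, 64


class CondEncoder(nn.Module):
    """Maps an image feature and its class attribute to a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))


class CondDecoder(nn.Module):
    """Generates an image feature from a latent code and an attribute."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, FEAT_DIM),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))


def imq_mmd(z_q, z_p):
    """MMD between encoded codes z_q and prior samples z_p, using the
    inverse multiquadric kernel common in WAE-MMD implementations."""
    n, d = z_q.shape
    c = 2.0 * d  # bandwidth heuristic (assumed)
    k = lambda a, b: c / (c + torch.cdist(a, b).pow(2))
    off_diag = 1.0 - torch.eye(n, device=z_q.device)
    return ((k(z_q, z_q) * off_diag).sum()
            + (k(z_p, z_p) * off_diag).sum()) / (n * (n - 1)) \
           - 2.0 * k(z_q, z_p).mean()


enc, dec = CondEncoder(), CondDecoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                       lr=1e-4)
LAM = 10.0  # penalty weight, an assumed hyperparameter


def train_step(x, a):
    """One optimization step on seen-class features x with attributes a."""
    z = enc(x, a)
    x_hat = dec(z, a)
    recon = (x_hat - x).pow(2).sum(dim=1).mean()   # reconstruction cost
    z_prior = torch.randn_like(z)                  # samples from p(z) = N(0, I)
    loss = recon + LAM * imq_mmd(z, z_prior)       # penalized WAE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


@torch.no_grad()
def synthesize(attr, n):
    """Sample n synthetic features for a class given its attribute vector."""
    z = torch.randn(n, LATENT_DIM)
    return dec(z, attr.expand(n, -1))
```

Once trained on seen classes, the decoder can be sampled with an unseen class's attribute vector (synthesize above) to produce training features for an ordinary supervised classifier, which is the generative GZSL recipe the abstract describes.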
