Abstract

Generalized zero-shot classification is a challenging task that requires recognizing test data which may come from either seen or unseen classes. Existing methods suffer from a bias problem: unseen images are easily misclassified into seen classes. Generating fake unseen samples with a Generative Adversarial Network has become a popular approach, but such models are difficult to train. In this paper, we propose a method that learns domain-invariant unseen features for generalized zero-shot classification. Specifically, we learn a support seen-class set for each unseen class to transfer knowledge from the source to the target domain. The unseen samples of each class are generated from combinations of samples drawn from its support seen-class set. To handle the domain shift between the source and target domains, we learn domain-invariant unseen features by minimizing the Maximum Mean Discrepancy (MMD) distance between the seen data and the generated unseen data, and then project the target data into the common space. To address the bias problem, we select confident target unseen samples to augment the training set used to train the classifier. In experiments, we demonstrate that the proposed method significantly outperforms other state-of-the-art methods.
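For readers unfamiliar with the MMD criterion mentioned above, the following is a minimal illustrative sketch, not the authors' implementation, of how a squared MMD between two feature sets can be computed with a Gaussian kernel. The array names, dimensions, and bandwidth are hypothetical placeholders.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y
    d2 = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimator of the squared Maximum Mean Discrepancy between
    # the distributions that generated the samples x and y
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Hypothetical example: compare generated unseen-class features with target-domain features
rng = np.random.default_rng(0)
generated_unseen = rng.normal(0.0, 1.0, size=(128, 64))
target_features = rng.normal(0.5, 1.0, size=(128, 64))
print(mmd2(generated_unseen, target_features, sigma=2.0))
```

In a training loop, a term of this form would typically be added to the overall loss so that the feature extractor is driven to make the two distributions indistinguishable.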
