Abstract

There are vastly more object categories in the real world than those covered by image datasets. Zero-shot learning aims to recognize image categories that are unseen in the training set. Many previous zero-shot learning models use the word vectors of class labels directly as category prototypes in the semantic embedding space, but word vectors alone cannot sufficiently capture the global knowledge of an image category. In this paper, we propose a new encyclopedia-enhanced semantic embedding model that strengthens the discriminative capability of word-vector prototypes with the global knowledge of each image category. The proposed model extracts TF-IDF keywords from encyclopedia articles to acquire the global knowledge of each category, and a convex combination of the keywords' word vectors serves as the prototype of each object category. The prototypes of the seen and unseen classes build up the embedding space, where nearest-neighbour search is performed to recognize unseen images. Experiments show that the proposed method achieves state-of-the-art performance on the challenging ImageNet Fall 2011 1k2hop dataset.
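To make the pipeline described above concrete, the following is a minimal sketch of how encyclopedia-based prototypes and the nearest-neighbour classifier could be assembled. It assumes scikit-learn's TfidfVectorizer for keyword scoring and a generic token-to-vector dictionary; all names (build_prototypes, predict, top_k, word_vectors) are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_prototypes(articles, word_vectors, top_k=20):
    """Build one prototype per class from its encyclopedia article.

    articles:     {class_label: encyclopedia article text}
    word_vectors: {token: np.ndarray of shape (d,)}
    """
    labels = list(articles)
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([articles[c] for c in labels])  # |classes| x |vocab|
    terms = np.array(vectorizer.get_feature_names_out())

    prototypes = {}
    for row, label in enumerate(labels):
        scores = tfidf[row].toarray().ravel()
        order = np.argsort(scores)[::-1][:top_k]          # top-k TF-IDF keywords
        weights, vectors = [], []
        for idx in order:
            term = terms[idx]
            if scores[idx] > 0 and term in word_vectors:  # keep keywords with embeddings
                weights.append(scores[idx])
                vectors.append(word_vectors[term])
        w = np.asarray(weights) / np.sum(weights)         # normalised convex weights
        prototypes[label] = w @ np.stack(vectors)         # convex combination prototype
    return prototypes

def predict(image_embedding, prototypes):
    """Nearest-neighbour search over seen and unseen class prototypes."""
    labels = list(prototypes)
    dists = [np.linalg.norm(image_embedding - prototypes[c]) for c in labels]
    return labels[int(np.argmin(dists))]
```

In this sketch the TF-IDF weights of the retained keywords are renormalised to sum to one, so each prototype is a convex combination of keyword word vectors, and an image embedded into the same space is assigned the label of its closest prototype.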
