Abstract

Given a set of labeled data with semantic descriptions, zero-shot learning aims to recognize objects from unseen classes, i.e., classes for which no instances are available during training. Most existing methods solve this problem by embedding images and labels into a shared embedding space and computing similarity across the different information sources. However, this similarity calculation can be unreliable when the training and testing data distributions are inconsistent. In this paper, we propose a novel zero-shot learning model that forms a neighborhood-preserving structure in the semantic embedding space and utilizes it to predict classifiers for unseen classes. By constructing a locally connected graph over class embeddings, we exploit the structural constraint among embeddings of similar classes and retain the global structure of the semantic embedding space, yielding an effective representation of semantic information. Experimental results on three benchmark datasets demonstrate that the proposed method generates effective semantic representations and outperforms state-of-the-art methods.
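The abstract's core idea, transferring a neighborhood-preserving structure from the class-embedding space to classifier space, can be sketched roughly as follows. This is a minimal illustrative sketch only: the function names, the k-nearest-neighbor graph construction, and the LLE-style local reconstruction weights are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def reconstruction_weights(target, neighbors, reg=1e-3):
    """Solve for weights that reconstruct `target` as a convex-like
    combination of its neighbors (LLE-style local linearity).
    NOTE: illustrative assumption, not the paper's exact objective."""
    Z = neighbors - target                 # neighbors relative to target, (k, d)
    G = Z @ Z.T                            # local Gram matrix, (k, k)
    G += reg * np.eye(len(G))              # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                     # normalize so weights sum to 1

def predict_unseen_classifier(unseen_emb, seen_embs, seen_classifiers, k=3):
    """Synthesize a classifier for an unseen class by reusing the local
    graph structure of class embeddings in classifier space."""
    # Edges of the locally connected graph: k nearest seen classes
    # in the semantic embedding space.
    dists = np.linalg.norm(seen_embs - unseen_emb, axis=1)
    idx = np.argsort(dists)[:k]
    w = reconstruction_weights(unseen_emb, seen_embs[idx])
    # The same local weights combine the seen classes' classifiers.
    return w @ seen_classifiers[idx]
```

The key design point mirrored here is that the weights are computed once, from semantic embeddings, and then applied unchanged to classifier parameters, so the neighborhood structure of the semantic space is preserved in the space of classifiers.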
