Abstract

Zero-shot learning has received great interest in the visual recognition community. It aims to classify new, unobserved classes using a model learned from observed classes. Most zero-shot learning methods require pre-provided semantic attributes as mid-level information to discover the intrinsic relationship between observed and unobserved categories. However, annotating such rich label information for observed objects is impractical in real-world applications, and the resulting shortage of labeled seen data severely degrades zero-shot learning performance. To overcome this obstacle, we develop a Low-rank Semantics Grouping (LSG) method for zero-shot learning in a semi-supervised fashion, which jointly uncovers the intrinsic relationship between visual and semantic information and recovers the missing label information of the seen classes. Specifically, a visual-semantic encoder serves as the projection model, a low-rank semantic grouping scheme captures the intrinsic correlations among attributes, and a Laplacian graph constructed from the visual features guides label propagation from labeled instances to unlabeled ones. Experiments on several standard zero-shot learning benchmarks demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods. Our model is robust to different levels of missing labels, and visualized results show that LSG distinguishes the unseen test classes more discriminatively.
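To make the graph-based component concrete, the following is a minimal sketch of standard Laplacian label propagation (in the closed form of Zhou et al., "Learning with Local and Global Consistency"), which is consistent with the abstract's description of propagating labels from labeled to unlabeled seen instances over a graph built from visual features. The function and parameter names (`propagate_labels`, `visual_feats`, `n_neighbors`, `alpha`, `sigma`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def propagate_labels(visual_feats, labels, n_neighbors=10, alpha=0.99, sigma=1.0):
    """Recover missing labels by propagation over a kNN graph on visual features.

    visual_feats: (n, d) array of visual features.
    labels:       (n, c) array; one-hot rows for labeled instances,
                  all-zero rows for unlabeled ones.
    Returns predicted class indices for all n instances.
    """
    # Gaussian-kernel affinities between all pairs of visual features.
    dists = cdist(visual_feats, visual_feats)
    W = np.exp(-dists**2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)

    # Sparsify: keep only the n_neighbors strongest edges per node, then symmetrize.
    weak = np.argsort(-W, axis=1)[:, n_neighbors:]
    np.put_along_axis(W, weak, 0.0, axis=1)
    W = np.maximum(W, W.T)

    # Symmetrically normalized graph: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Closed-form propagation: F = (I - alpha * S)^{-1} Y.
    F = np.linalg.solve(np.eye(len(labels)) - alpha * S, labels)
    return F.argmax(axis=1)
```

In this formulation, `alpha` balances smoothness over the graph against fidelity to the observed labels; LSG additionally couples such propagation with the low-rank grouping of semantic attributes, which this sketch does not model.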
