Abstract

Zero-shot learning (ZSL) aims to recognize images from unseen classes without requiring any training samples of those classes. The ZSL problem is typically addressed by building a semantic embedding space, such as one defined by attributes, to bridge the visual features and class labels of images. Most current ZSL approaches learn a visual-semantic alignment from seen classes using only human-designed attributes, and then solve the ZSL problem by transferring semantic knowledge from seen classes to unseen classes. However, few works examine whether the human-designed attributes are discriminative enough for image class prediction. To address this issue, we propose a semantic-aware dictionary learning (SADL) framework to learn discriminative visual attributes shared across seen and unseen classes. Furthermore, the semantic cues are elegantly integrated into the feature representations via the learned visual attributes for the recognition task. Experiments conducted on two challenging benchmark datasets show that our approach outperforms other state-of-the-art ZSL methods.
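To make the general idea concrete, the following is a minimal sketch of a generic dictionary-learning objective that couples visual features with class-level semantic attributes, not the authors' actual SADL algorithm. It assumes an objective of the form min_{D,A,W} ||X - DA||_F^2 + lam*||S - WA||_F^2 + gamma*||A||_F^2 solved by alternating closed-form updates; all names (X, S, D, A, W, lam, gamma) and the update rules are illustrative assumptions.

```python
# Hypothetical sketch of semantic-constrained dictionary learning (not the SADL method itself).
import numpy as np

def semantic_dictionary_learning(X, S, n_atoms=64, lam=1.0, gamma=0.1, n_iters=50, seed=0):
    """X: (d_visual, n_samples) visual features; S: (d_semantic, n_samples) attribute vectors."""
    rng = np.random.default_rng(seed)
    d_vis, n = X.shape
    d_sem = S.shape[0]
    D = rng.standard_normal((d_vis, n_atoms))   # visual dictionary (atoms as columns)
    W = rng.standard_normal((d_sem, n_atoms))   # alignment from latent codes to semantic attributes
    I = np.eye(n_atoms)
    for _ in range(n_iters):
        # Update codes A using both the reconstruction and semantic-alignment terms (closed form).
        A = np.linalg.solve(D.T @ D + lam * (W.T @ W) + gamma * I,
                            D.T @ X + lam * (W.T @ S))
        # Update dictionary D and alignment W by ridge-style regression onto the codes.
        G = A @ A.T + 1e-6 * I
        D = np.linalg.solve(G, A @ X.T).T
        W = np.linalg.solve(G, A @ S.T).T
        # Normalize dictionary atoms to avoid scale ambiguity.
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)
    return D, W, A

# Toy usage with random data, just to show the shapes involved.
X = np.random.randn(512, 200)   # 200 samples with 512-d visual features
S = np.random.randn(85, 200)    # 85-d human-designed attribute vectors per sample
D, W, A = semantic_dictionary_learning(X, S)
print(D.shape, W.shape, A.shape)  # (512, 64) (85, 64) (64, 200)
```

In such a formulation, the learned codes A play the role of latent visual attributes, and the alignment matrix W ties them to the human-designed semantic space so that knowledge can be transferred to unseen classes at test time.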
