Abstract

Zero-shot Learning (ZSL) aims to recognize novel classes by transferring knowledge from seen classes. The canonical approach to ZSL leverages a visual-to-semantic embedding that maps the global features of an image to its semantic representation. These global features usually overlook the fine-grained information that is vital for knowledge transfer between seen and unseen classes, rendering them sub-optimal for the ZSL task, especially for the more realistic Generalized Zero-shot Learning (GZSL) task, where the global features of similar classes can hardly be separated. To remedy this problem, we propose Language-Augmented Pixel Embedding (LAPE), which directly bridges the visual and semantic spaces in a pixel-based manner. To this end, we map the local features of each pixel to different attributes and then extract each semantic attribute from its corresponding pixels. However, the lack of pixel-level annotation leads to inefficient pixel-based knowledge transfer. To mitigate this issue, we adopt the textual information of each attribute to augment the local features of the image pixels that are related to the semantic attributes. Experiments on four ZSL benchmarks demonstrate that LAPE outperforms current state-of-the-art methods. Comprehensive ablation studies and analyses are provided to dissect the factors behind this success.
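To make the pixel-based idea concrete, the sketch below illustrates one plausible reading of such a module: each pixel feature is scored against every attribute, and projected attribute text embeddings guide attention over the pixels so that each attribute is aggregated from its related locations. This is only a minimal sketch under stated assumptions, not the authors' actual LAPE implementation; the module name, the fusion scheme, and tensor names such as `attr_text_emb` are illustrative.

```python
# Minimal sketch of a language-guided pixel-to-attribute embedding (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAttributeEmbedding(nn.Module):
    def __init__(self, feat_dim, text_dim, num_attrs):
        super().__init__()
        # 1x1 conv maps each pixel's local feature to per-attribute scores
        self.pixel_to_attr = nn.Conv2d(feat_dim, num_attrs, kernel_size=1)
        # project attribute text embeddings (e.g., word vectors) into the visual space
        self.text_proj = nn.Linear(text_dim, feat_dim)

    def forward(self, feat_map, attr_text_emb):
        # feat_map: (B, C, H, W) backbone features; attr_text_emb: (A, T)
        B, C, H, W = feat_map.shape
        pixels = feat_map.flatten(2).transpose(1, 2)       # (B, HW, C)
        text = self.text_proj(attr_text_emb)               # (A, C)
        # language-guided attention: pixels related to an attribute get higher weight
        attn = F.softmax(pixels @ text.t(), dim=1)         # (B, HW, A)
        # per-pixel attribute scores from the visual branch
        scores = self.pixel_to_attr(feat_map).flatten(2)   # (B, A, HW)
        scores = scores.transpose(1, 2)                    # (B, HW, A)
        # aggregate each attribute from its most related pixels
        return (attn * scores).sum(dim=1)                  # (B, A)

# Classification compares predicted attributes with class semantic vectors
# (class_attrs: (num_classes, A)); the best-matching class wins.
def classify(pred_attrs, class_attrs):
    return (pred_attrs @ class_attrs.t()).argmax(dim=-1)
```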
