Abstract

Multi-label image classification aims to predict the set of labels present in an image. Its key challenge lies in two aspects: modeling label correlations and exploiting spatial information. However, existing approaches mainly estimate label correlations from co-occurrence statistics, which are easily distorted by label noise and occasional co-occurrences. In addition, some works attempt to model the correlation between labels and spatial features, but label correlations are not fully exploited when modeling the spatial relationships among features. To address these issues, we propose a novel cross-modality semantic guidance-based framework for multi-label image classification, namely CMSG. First, we design a semantic-guided attention (SGA) module that applies the label correlation matrix to guide the learning of class-specific features, implicitly modeling semantic correlations among labels. Second, we design a spatial-aware attention (SAA) module that extracts high-level semantic-aware spatial features from the class-specific features produced by the SGA module. Experiments on three benchmark datasets demonstrate that the proposed method outperforms existing state-of-the-art algorithms for multi-label image classification.
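To make the two attention stages concrete, the following is a minimal PyTorch sketch of how a label correlation matrix could guide class-specific features (SGA) and how those features could then attend over spatial features (SAA). All module names, tensor shapes, and design details here are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SGA(nn.Module):
    """Sketch of semantic-guided attention: a C x C label correlation
    matrix propagates semantics across per-class features."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, class_feats, corr):
        # class_feats: (B, C, D), corr: (C, C) label correlation matrix
        attn = torch.softmax(corr, dim=-1)           # normalize correlations per class
        guided = attn @ self.proj(class_feats)       # mix class features by correlation
        return guided + class_feats                  # residual class-specific features

class SAA(nn.Module):
    """Sketch of spatial-aware attention: class-specific features query
    spatial features to yield semantic-aware spatial representations."""
    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5

    def forward(self, class_feats, spatial_feats):
        # class_feats: (B, C, D), spatial_feats: (B, HW, D)
        scores = class_feats @ spatial_feats.transpose(1, 2) * self.scale  # (B, C, HW)
        attn = torch.softmax(scores, dim=-1)
        return attn @ spatial_feats                  # (B, C, D) semantic-aware spatial features

# Usage sketch with hypothetical shapes.
B, C, D, HW = 2, 20, 512, 49
class_feats = torch.randn(B, C, D)
spatial_feats = torch.randn(B, HW, D)
corr = torch.rand(C, C)
out = SAA(D)(SGA(D)(class_feats, corr), spatial_feats)   # (B, C, D)
```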
