Abstract

Multi-Label Image Classification (MLIC) is a fundamental yet challenging task that aims to recognize multiple labels in a given image. The key to solving MLIC lies in accurately modeling the correlations between labels. Recent studies often adopt a Graph Convolutional Network (GCN) to model label dependencies, using word embeddings as prior knowledge. However, classical word embeddings typically contain redundant information due to the imperfect distributional hypothesis they rely on, which may degrade model generalizability. To tackle this problem, we propose a novel deep learning framework termed Visual-Semantic based Graph Convolutional Network (VSGCN), which alleviates the negative impact of redundant information by exploiting heterogeneous sources of prior knowledge. Specifically, we construct both a visual prototype and a semantic prototype for each label as heterogeneous prior label representations, which are then mapped to multi-label classifiers by two separate Multi-Head GCNs. The Multi-Head GCN mechanism proposed in this paper guides information propagation between prototypes by constructing multiple correlation graphs that model label correlations in different subspaces simultaneously. Notably, we further suppress the influence of redundant information by reducing the inconsistency between predictions made in the visual space and the semantic space. Extensive experiments on various multi-label image datasets demonstrate the superiority of the proposed method.
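For concreteness, the sketch below illustrates one plausible reading of the two core ideas in the abstract: a Multi-Head GCN that propagates label prototypes through several per-head correlation graphs (one subspace each) to produce per-label classifier weights, and a consistency term that penalizes disagreement between the visual-space and semantic-space predictions. The class and function names, the learnable per-head adjacency parameterization, and the MSE-based consistency term are our illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadGCN(nn.Module):
    """Hypothetical multi-head graph convolution: each head keeps its own
    learnable label-correlation graph and propagates prototype features
    in a separate subspace, as suggested by the abstract."""
    def __init__(self, num_labels, in_dim, out_dim, num_heads=4):
        super().__init__()
        assert out_dim % num_heads == 0
        self.num_heads = num_heads
        # One learnable correlation graph per head, initialized to identity
        # (an assumption; the paper may build graphs from data statistics).
        self.adj = nn.Parameter(torch.eye(num_labels).repeat(num_heads, 1, 1))
        self.proj = nn.ModuleList(
            nn.Linear(in_dim, out_dim // num_heads) for _ in range(num_heads)
        )

    def forward(self, prototypes):  # prototypes: (num_labels, in_dim)
        heads = []
        for h in range(self.num_heads):
            # Row-normalize the head-specific graph, then propagate and transform.
            a = F.softmax(self.adj[h], dim=-1)
            heads.append(F.relu(a @ self.proj[h](prototypes)))
        # Concatenate the subspace outputs into per-label classifier weights.
        return torch.cat(heads, dim=-1)  # (num_labels, out_dim)


def consistency_loss(logits_vis, logits_sem):
    """Penalize disagreement between the visual-branch and semantic-branch
    predictions (here: MSE between per-label probabilities)."""
    return F.mse_loss(torch.sigmoid(logits_vis), torch.sigmoid(logits_sem))
```

In this reading, two such modules (one fed visual prototypes, one fed semantic word-embedding prototypes) each yield a matrix of classifier weights; taking the dot product of an image's features with each matrix gives two sets of per-label logits, which are trained jointly with the usual multi-label loss plus the consistency term above.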
