Abstract

The task of multi-label image recognition is to predict the set of object labels present in an image. As objects normally co-occur in an image, it is desirable to model label dependencies to improve recognition performance. To capture and exploit such information, we propose models based on graph convolutional networks (GCNs) for multi-label image recognition, where directed graphs are constructed over classes and information is propagated between classes to learn inter-dependent class-level representations. Following this idea, we design two models that approach multi-label classification from different views. In our first model, prior knowledge about class dependencies is integrated into classifier learning. Specifically, we propose the Classifier Learning GCN (C-GCN) to map class-level semantic representations (e.g., word embeddings) into classifiers that maintain the inter-class topology. In our second model, we decompose the visual representation of an image into a set of label-aware features and propose the Prediction Learning GCN (P-GCN) to encode such features into inter-dependent image-level prediction scores. Furthermore, we present an effective correlation matrix construction approach to capture inter-class relationships and consequently guide information propagation among classes. Empirical results on generic multi-label image recognition demonstrate that both proposed models clearly outperform existing state-of-the-art methods. The proposed methods also show advantages in other applications related to multi-label classification.
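
To make the classifier-learning view concrete, below is a minimal sketch (not the authors' released code) of how a stacked GCN can map per-class word embeddings into inter-dependent classifiers that are then applied to a pooled image feature. The dimensions (300-d embeddings, 2048-d image features), the LeakyReLU activation, and the identity matrix used as a stand-in for the normalized label correlation matrix are all illustrative assumptions.

```python
# Illustrative C-GCN-style head: a two-layer GCN propagates information over a
# (pre-normalized) label correlation matrix, turning class word embeddings into
# classifiers; label scores are the dot product with the global image feature.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, adj):
        # Propagation rule H' = A · H · W, with adj assumed already normalized.
        return adj @ x @ self.weight


class CGCNHead(nn.Module):
    def __init__(self, embed_dim=300, hidden_dim=1024, feat_dim=2048):
        super().__init__()
        self.gc1 = GCNLayer(embed_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, feat_dim)

    def forward(self, image_feat, label_embed, adj):
        # label_embed: (C, embed_dim) class word embeddings
        # adj:         (C, C) normalized label correlation matrix
        # image_feat:  (B, feat_dim) pooled CNN image features
        classifiers = self.gc2(F.leaky_relu(self.gc1(label_embed, adj)), adj)
        return image_feat @ classifiers.t()  # (B, C) label scores


# Toy usage with random tensors standing in for real embeddings and features.
C, B = 20, 4
head = CGCNHead()
scores = head(torch.randn(B, 2048), torch.randn(C, 300), torch.eye(C))
print(scores.shape)  # torch.Size([4, 20])
```

In this reading, the GCN output plays the role of the classifier weights, so label correlations learned on the graph are baked into the final scoring step rather than applied as a post-hoc re-ranking.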
