Abstract

In real-world scenarios, an object can be associated with multiple tags rather than a single categorical label, which motivates multi-label learning (MLL). The main challenges in MLL are the complex semantic relations among labels and the long-tailed distribution of training samples. Semi-supervised learning is a potential solution. However, existing semi-supervised methods are mainly designed for the single-label setting and ignore latent label relations; moreover, they cannot properly handle the distribution shift that commonly exists between source and target domains. To address these issues, we propose a Semi-supervised Dual Relation Learning (SDRL) framework for multi-label classification. SDRL uses a small number of labeled samples together with large-scale unlabeled samples during training, and it jointly explores the inter-instance feature-level relation and the intra-instance label-level relation, even from unlabeled samples. In our model, a dual-classifier structure is deployed to obtain domain-invariant representations; the predictions of the two classifiers are compared, and the most confident ones are extracted as pseudo labels. A trainable label relation tensor is designed to explicitly model pairwise latent label relations and refine the predicted labels. SDRL thus effectively and efficiently exploits both feature-label and label-label relation knowledge without requiring any extra semantic knowledge. We evaluate SDRL on general and zero-shot multi-label classification tasks and show that it outperforms state-of-the-art baselines. Extensive ablation studies further verify the effectiveness of each component of the framework.
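
To make the two mechanisms in the abstract concrete, the sketch below illustrates, in PyTorch, (1) a dual-classifier head whose agreeing, high-confidence predictions on unlabeled samples are kept as pseudo labels, and (2) a trainable pairwise label-relation matrix used to refine raw predictions. This is a minimal illustration assuming a simple linear parameterization; all names, shapes, and the 0.9 confidence threshold are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SDRLSketch(nn.Module):
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        # Shared feature extractor (stand-in for the domain-invariant backbone).
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        # Dual classifiers over the shared representation.
        self.clf1 = nn.Linear(256, num_labels)
        self.clf2 = nn.Linear(256, num_labels)
        # Trainable pairwise label-relation matrix (hypothetical parameterization
        # of the "label relation tensor" described in the abstract).
        self.label_relation = nn.Parameter(torch.eye(num_labels))

    def forward(self, x):
        h = self.backbone(x)
        logits1, logits2 = self.clf1(h), self.clf2(h)
        # Refine predictions by propagating scores through the label-relation matrix.
        refined = 0.5 * (logits1 + logits2) @ self.label_relation
        return logits1, logits2, refined

def pseudo_labels(logits1, logits2, threshold: float = 0.9):
    """Keep only labels on which both classifiers agree with high confidence."""
    p1, p2 = torch.sigmoid(logits1), torch.sigmoid(logits2)
    pos = (p1 > threshold) & (p2 > threshold)            # confident positives
    neg = (p1 < 1 - threshold) & (p2 < 1 - threshold)    # confident negatives
    mask = pos | neg                      # entries usable as supervision
    targets = pos.float()                 # 1 for confident positives, else 0
    return targets, mask

# Usage: masked binary cross-entropy on unlabeled data with the pseudo labels.
model = SDRLSketch(feat_dim=128, num_labels=20)
x_unlabeled = torch.randn(8, 128)
l1, l2, refined = model(x_unlabeled)
targets, mask = pseudo_labels(l1, l2)
loss = (nn.functional.binary_cross_entropy_with_logits(
    refined, targets, reduction="none") * mask).sum() / mask.sum().clamp(min=1)
```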
