Multi-view multi-label learning (MVML) aims to train a model that exploits the multi-view information of an input sample to predict its multiple labels accurately. Unfortunately, most existing MVML methods assume complete data, which limits their applicability in practical settings where some views are partially missing or some labels are unknown. Recently, many approaches have been proposed for incomplete data, but few of them can handle both missing views and missing labels. Moreover, these few existing works commonly ignore potentially valuable information about unknown labels or do not sufficiently explore latent label information. Therefore, in this paper, we propose a label semantic-guided contrastive learning method, named LSGC, for the dual incomplete multi-view multi-label classification problem. Concretely, LSGC employs deep neural networks to extract high-level features of samples. Motivated by the observation that label correlations can improve feature discriminability, we introduce a graph convolutional network to effectively capture label semantics. Furthermore, we introduce a new sample-label contrastive loss that explores label semantic information and enhances feature representation learning. For missing labels, we adopt a pseudo-label filling strategy and develop a weighting mechanism to exploit confidently recovered label information. We validate the framework on five standard datasets, and the experimental results show that our method outperforms state-of-the-art methods.
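To make the sample-label contrastive idea concrete, below is a minimal sketch (not the paper's actual implementation) of an InfoNCE-style loss between sample embeddings and label embeddings, where a sample's positive pairs are the labels it carries. The array names `Z` (sample features), `L` (label embeddings, e.g. from a GCN), `Y` (binary label matrix), and the temperature `tau` are illustrative assumptions, not quantities defined in the abstract.

```python
import numpy as np

def sample_label_contrastive_loss(Z, L, Y, tau=0.5):
    """Hypothetical sample-label contrastive loss sketch.

    Z: (n, d) sample embeddings; L: (c, d) label embeddings;
    Y: (n, c) binary label matrix (1 = sample has that label).
    For each sample, its ground-truth labels act as positives and
    all other labels as negatives, InfoNCE-style.
    """
    # L2-normalize so the dot product is cosine similarity.
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    L = L / np.linalg.norm(L, axis=1, keepdims=True)
    sim = (Z @ L.T) / tau                      # (n, c) scaled similarities
    # Numerically stable log-softmax over the label dimension.
    sim = sim - sim.max(axis=1, keepdims=True)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Average negative log-likelihood over all positive (sample, label) pairs.
    pos = Y.astype(bool)
    return -log_prob[pos].mean()
```

With this formulation, pulling a sample's embedding toward the embeddings of its labels (and away from the others) is what couples representation learning to label semantics; for missing labels, a confidence weight on recovered pseudo-labels could be folded in by replacing the mean with a weighted average over positive pairs.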