Abstract
In multi-view multi-label learning, each object is represented by multiple data views and belongs to multiple class labels simultaneously. Generally, all the data views contribute to the multi-label learning task, but their contributions differ. Besides, within each data view, each class label is associated with only a subset of the data features, and different features contribute differently to each class label. In this paper, we propose a novel framework for multi-view multi-label learning, named VLSF, i.e., multi-view multi-label learning with View-Label-Specific Features. Specifically, we first learn a low-dimensional label-specific data representation for each data view, construct a multi-label classification model based on it by exploiting label correlations and view consensus, and jointly learn the contribution weight of each data view to the multi-label learning task over all the class labels. The final prediction is then made by combining the prediction results of all the classifiers with the learned contribution weights. Extensive comparison experiments with state-of-the-art approaches manifest the effectiveness of the proposed method VLSF.
Highlights
Multi-label learning [1]–[4] deals with the problem that an object is represented by a single instance and associated with multiple class labels simultaneously, where the class labels may be correlated with each other.
Huang et al.: Multi-View Multi-Label Learning With View-Label-Specific Features
The proposed method VLSF achieves better performance than all the comparing algorithms over the twelve data sets overall, which manifests the effectiveness of VLSF in solving multi-view multi-label learning tasks by learning view-label-specific features.
Summary
The proposed method VLSF constructs a multi-label classification model based on each label-specific data representation and simultaneously learns appropriate contribution weights for the different data views. Among related approaches, LSML [6] jointly performs missing label set recovery and label-specific feature learning for multi-label classification with missing labels. As aforementioned, such approaches can be applied directly to multi-view multi-label learning problems, but they are limited in exploiting the complementarity and consensus between different data views.

LEARNING VIEW-LABEL-SPECIFIC FEATURES

The learning framework of our proposed method VLSF is mainly composed of three steps: learning a low-dimensional view-label-specific data representation for each data view and constructing the corresponding multi-label classifiers, learning the view contribution weights, and fusing the classification results of all the classifiers. The features selected for the i-th class label in the v-th data view are named the view-label-specific features.
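The fusion step above combines the per-view classifier outputs using the learned view contribution weights. A minimal sketch of this weighted fusion is given below; all function and variable names (e.g. `vlsf_predict`, `projections`, `view_alphas`) and the assumption of linear classifiers are illustrative, not the authors' actual implementation.

```python
import numpy as np

def vlsf_predict(views, projections, classifier_weights, view_alphas):
    """Hypothetical VLSF-style prediction fusion.

    views             : list of (n_samples, d_v) feature matrices, one per view
    projections       : list of (d_v, k) matrices mapping each view to its
                        low-dimensional label-specific representation
    classifier_weights: list of (k, n_labels) per-view classifier weights
    view_alphas       : (n_views,) non-negative view contribution weights
                        (assumed to sum to 1)
    """
    scores = 0.0
    for X_v, P_v, W_v, a_v in zip(views, projections, classifier_weights, view_alphas):
        Z_v = X_v @ P_v                      # label-specific representation of view v
        scores = scores + a_v * (Z_v @ W_v)  # weighted per-view prediction
    return scores                            # (n_samples, n_labels) real-valued scores

# Toy usage with two views, 5 samples, 4 labels
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(5, 8)), rng.normal(size=(5, 6))
P1, P2 = rng.normal(size=(8, 3)), rng.normal(size=(6, 3))
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
Y_hat = vlsf_predict([X1, X2], [P1, P2], [W1, W2], np.array([0.6, 0.4]))
print(Y_hat.shape)  # (5, 4)
```

The real method learns the projections, classifiers, and view weights jointly under label-correlation and view-consensus constraints; this sketch only illustrates how the final weighted combination of per-view predictions could look.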