Abstract

In the task of multiview multilabel (MVML) classification, each instance is represented by several heterogeneous features and associated with multiple semantic labels. Existing MVML methods mainly focus on leveraging a shared subspace to comprehensively explore multiview consensus information across different views, yet it remains an open problem whether such a shared subspace representation can effectively characterize all relevant labels when formulating a desired MVML model. In this article, we propose a novel label-driven view-specific fusion MVML method named L-VSM, which bypasses the search for a shared subspace representation and instead directly encodes the feature representation of each individual view to contribute to the final multilabel classifier induction. Specifically, we first design a label-driven feature graph construction strategy and build the corresponding feature graph for all instances under each feature representation. Then, these view-specific feature graphs are integrated into a unified graph by linking the different feature representations within each instance. Afterward, we adopt a graph attention mechanism to aggregate and update all feature nodes on the unified graph and generate structural representations for each instance, where both intra-view correlations and inter-view alignments are jointly encoded to discover the underlying consensuses and complementarities across different views. Moreover, to explore the widespread label correlations in multilabel learning (MLL), the transformer architecture is introduced to construct a dynamic semantic-aware label graph and accordingly generate structural semantic representations for each specific class. Finally, we derive an instance-label affinity score for each instance by averaging the affinity scores of its different feature representations, and train the model with the multilabel soft margin loss. Extensive experiments on various MVML applications verify that the proposed L-VSM achieves superior performance over state-of-the-art methods. The code is available at https://gengyulyu.github.io/homepage/assets/codes/LVSM.zip.
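
The following is a minimal sketch, not the authors' implementation, of the final scoring step described above: each view-specific structural representation is scored against transformer-refined label embeddings, the per-view affinity scores are averaged into an instance-label score, and training uses the multilabel soft margin loss. The module names (`LVSMSketch`), the simple per-view encoders standing in for the graph-attention aggregation over the unified feature graph, and the single transformer layer standing in for the dynamic semantic-aware label graph are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LVSMSketch(nn.Module):
    def __init__(self, view_dims, hidden_dim, num_labels):
        super().__init__()
        # One encoder per heterogeneous view (stand-in for the graph-attention
        # aggregation over the unified feature graph described in the abstract).
        self.view_encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in view_dims]
        )
        # Learnable label embeddings refined by a transformer encoder layer
        # (stand-in for the dynamic semantic-aware label graph).
        self.label_embed = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.label_encoder = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True
        )

    def forward(self, views):
        # views: list of [batch, view_dims[v]] tensors, one per view.
        labels = self.label_encoder(self.label_embed.unsqueeze(0)).squeeze(0)
        # Affinity of each view-specific representation with every label,
        # averaged over views to obtain the instance-label affinity score.
        scores = [enc(x) @ labels.t() for enc, x in zip(self.view_encoders, views)]
        return torch.stack(scores, dim=0).mean(dim=0)

# Toy usage: two views, ten labels, multilabel soft margin loss.
model = LVSMSketch(view_dims=[64, 128], hidden_dim=32, num_labels=10)
x = [torch.randn(8, 64), torch.randn(8, 128)]
y = torch.randint(0, 2, (8, 10)).float()
loss = nn.MultiLabelSoftMarginLoss()(model(x), y)
loss.backward()
```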
