Abstract
Multi-view learning seeks to exploit the complementary strengths of different views and make full use of the latent information in the data. Nevertheless, effectively exploring and utilizing the common and complementary information across diverse views remains challenging. In this paper, we propose two multi-view classifiers: a multi-view support vector machine via the L0/1 soft-margin loss (MvL0/1-SVM) and its structural variant (MvSL0/1-SVM). The key difference between them is that MvSL0/1-SVM additionally fuses structural information, thereby satisfying both the consensus and complementarity principles. Despite the discrete nature of the L0/1 soft-margin loss, we establish the optimality theory for MvSL0/1-SVM, including the existence of optimal solutions and their relationship to P-stationary points. Motivated by the P-stationary point optimality condition, we design a working set strategy and integrate it into the proximal alternating direction method of multipliers, which significantly accelerates computation and reduces the number of support vectors. Finally, numerical experiments show that the proposed models achieve excellent classification performance with faster training, confirming the rationality and effectiveness of our methods.
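For readers unfamiliar with the loss named in the abstract, the following is a sketch of the standard (single-view) L0/1 soft-margin SVM formulation on which such models are typically built; the symbols w, b, C, and the sample notation (x_i, y_i) are assumptions for illustration and are not defined in the abstract itself.

```latex
% L0/1 soft-margin loss: counts a unit penalty whenever the
% functional margin is violated, and zero otherwise.
\ell_{0/1}(t) =
\begin{cases}
1, & t > 0, \\
0, & t \le 0.
\end{cases}

% A typical L0/1 soft-margin SVM objective over samples (x_i, y_i),
% y_i \in \{-1, +1\}, with regularization parameter C > 0:
\min_{w,\, b} \;\; \frac{1}{2}\|w\|^2
  + C \sum_{i=1}^{m} \ell_{0/1}\bigl(1 - y_i(w^\top x_i + b)\bigr)
```

The step function \ell_{0/1} is discontinuous and nonconvex, which is the "discrete nature" the abstract refers to; this is why the optimality analysis proceeds through P-stationary points rather than standard convex duality.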