Multi-view learning seeks to leverage the complementary strengths of different views and make full use of the latent information in the data. Nevertheless, effectively exploring and exploiting the common and complementary information across diverse views remains challenging. In this paper, we propose two multi-view classifiers: a multi-view support vector machine via the L0/1 soft-margin loss (MvL0/1-SVM) and a structural MvL0/1-SVM (MvSL0/1-SVM). The key difference between them is that MvSL0/1-SVM additionally fuses structural information, thereby simultaneously satisfying the consensus and complementarity principles. Despite the discrete nature of the L0/1 soft-margin loss, we establish an optimality theory for MvSL0/1-SVM, demonstrating the existence of optimal solutions and elucidating their relationship with P-stationary points. Drawing on the P-stationary point optimality condition, we design a working set strategy and integrate it into the proximal alternating direction method of multipliers, which significantly accelerates computation and reduces the number of support vectors. Finally, numerical experiments show that the proposed models achieve excellent classification performance with faster computation, confirming the rationality and effectiveness of our methods.
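For concreteness, a minimal single-view sketch of the L0/1 soft-margin SVM underlying our models, following the standard formulation in the L0/1-SVM literature (the multi-view extensions add view-specific terms and consensus constraints not shown here), is

$$\min_{w,\,b}\ \frac{1}{2}\|w\|_2^2 + C\sum_{i=1}^{n} \ell_{0/1}\big(1 - y_i(w^\top x_i + b)\big), \qquad \ell_{0/1}(t) = \begin{cases} 1, & t > 0,\\ 0, & t \le 0, \end{cases}$$

so the loss directly counts margin violations rather than penalizing their magnitude. The proximal ADMM approach is tractable because, despite its discontinuity, $\ell_{0/1}$ admits a closed-form proximal operator of hard-thresholding type, namely $\operatorname{prox}_{\lambda\ell_{0/1}}(v) = 0$ if $0 < v \le \sqrt{2\lambda}$ and $v$ otherwise.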