Multi-view learning (MVL) is a promising data fusion technique based on the principles of consensus and complementarity. Despite significant advancements in this field, several challenges persist. First, scalability remains an issue: many existing approaches are limited to two-view scenarios and are difficult to extend to settings with more views. Second, implementing the consensus principle in current techniques often requires adding extra terms to the model's objective function or constraints, increasing model complexity. Third, when applying the complementarity principle, most studies focus on pairwise interactions between views, overlooking the benefits of deeper and broader multi-view interactions. To address these challenges, this paper proposes the multi-view interactive knowledge transfer (MVIKT) model, which improves scalability by effectively managing interactions across arbitrary numbers of views, thereby overcoming the limitations of traditional two-view models. More importantly, MVIKT introduces a novel interactive knowledge transfer strategy that enforces the consensus principle without additional objective terms or constraints. By treating margin distances as transferable knowledge and facilitating multiple rounds of interaction between views, MVIKT uncovers deeper complementary information, improving the overall effectiveness of MVL. Theoretical analysis further supports the MVIKT model, showing that transferring knowledge through margin distances lowers the upper bound of the generalization error. Extensive experiments across diverse datasets validate MVIKT's superiority, showing statistically significant improvements over benchmark methods.