Abstract

Multi-view learning focuses on leveraging the consensus and complementary information among multiple distinct feature representations to improve performance. Most multi-view learning models must address two main issues: first, how to fully exploit view agreement and view discrepancy; second, how to design a general multi-view model. By inheriting the asymmetric merit of the LINEX loss, we propose a general multi-view LINEX SVM framework comprising two models, MVLSVM-CO and MVLSVM-SIM. These models not only use the LINEX loss function to flexibly distinguish error-prone samples of both classes, but also take advantage of the consistency and complementarity of distinct views in the multi-view scenario. An iterative two-step strategy is adopted to solve the optimization problems efficiently. Furthermore, we theoretically analyze the view consistency and generalization capability of the proposed models using Rademacher complexity. Extensive experiments confirm the effectiveness of MVLSVM-CO and MVLSVM-SIM.
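
For readers unfamiliar with the asymmetry the abstract refers to, the sketch below illustrates the standard LINEX (linear-exponential) loss, L(u) = b(exp(a·u) − a·u − 1). The parameter names a and b and this exact formulation are assumptions based on the common LINEX definition, not necessarily the paper's specific margin-based variant; it is included only to show how one side of the error axis is penalized exponentially and the other roughly linearly.

```python
import numpy as np

def linex_loss(u, a=1.0, b=1.0):
    """Standard LINEX (linear-exponential) loss (assumed form).

    L(u) = b * (exp(a * u) - a * u - 1)
    For a > 0, positive errors grow exponentially while negative
    errors grow only roughly linearly, which is the asymmetry the
    abstract exploits to treat error-prone samples of the two
    classes differently.
    """
    return b * (np.exp(a * u) - a * u - 1.0)

# Same |error| on the two sides incurs very different costs when a != 0.
errors = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(linex_loss(errors, a=1.0))  # approx. [1.14, 0.37, 0.00, 0.72, 4.39]
```

Flipping the sign of a reverses which side of the error axis is penalized more heavily, which is why such a loss can be tuned per class.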
