Abstract

Multi-view learning (MVL) is an active direction in machine learning that aims to exploit the consensus and complementarity information among multiple distinct feature sets to improve the generalization performance of the underlying learning algorithm. Two classical SVM-based MVL methods are SVM-2K and the multi-view twin support vector machine (MvTSVM). Both are designed only for two-view classification and cannot handle the general multi-view classification problem; they also fail to effectively leverage the complementarity information among different feature views. In this paper, we propose two novel multi-view support vector machines that exploit both consensus and complementarity information and can handle not only two-view classification but also the general multi-view classification problem by jointly learning multiple views in a non-pairwise way. The disagreement among different views is incorporated as a constraint or a regularization term in the objective function, which plays an important role in exploiting the consensus information. Combination weights for reconstructing each view in the regularization terms are learned to exploit the complementarity information among views. Finally, an efficient iterative algorithm based on classical convex quadratic programming is developed for optimization. Experimental results validate the effectiveness of the proposed methods.
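For context, the following is a sketch of the two-view SVM-2K objective, i.e., the kind of pairwise formulation that the proposed methods generalize; the notation is illustrative and not taken from the paper. Given examples observed under views A and B with feature vectors x_i^A, x_i^B and labels y_i, SVM-2K trains one linear classifier per view and couples them through an epsilon-insensitive disagreement penalty:

\begin{aligned}
\min_{w_A,\, w_B,\, b_A,\, b_B}\;\; & \tfrac{1}{2}\|w_A\|^2 + \tfrac{1}{2}\|w_B\|^2 + C_A \sum_i \xi_i^A + C_B \sum_i \xi_i^B + D \sum_i \eta_i \\
\text{s.t.}\;\; & y_i \big(w_A^\top x_i^A + b_A\big) \ge 1 - \xi_i^A, \\
& y_i \big(w_B^\top x_i^B + b_B\big) \ge 1 - \xi_i^B, \\
& \big| (w_A^\top x_i^A + b_A) - (w_B^\top x_i^B + b_B) \big| \le \eta_i + \varepsilon, \\
& \xi_i^A,\ \xi_i^B,\ \eta_i \ge 0 .
\end{aligned}

The slack variables \eta_i penalize disagreement between the two views' decision values (the consensus principle). The methods proposed in the paper extend this idea beyond view pairs by coupling all views jointly rather than pairwise, and additionally learn combination weights in the regularization terms to capture complementarity among views.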

