Abstract

Multi-view semi-supervised learning has achieved great success in recent years by leveraging information from labeled and unlabeled multi-view data to improve generalization performance. Two classical two-view semi-supervised learning methods are multi-view Laplacian support vector machines (MvLapSVM) and multi-view Laplacian twin support vector machines (MvLapTSVM). However, they can only handle two-view classification problems, not general multi-view ones, and both require solving quadratic programming problems (QPPs), so their time complexity is quite high. In this paper, we formulate general multi-view Laplacian least squares support vector machines (GMvLapSVM) and general multi-view Laplacian least squares twin support vector machines (GMvLapTSVM), which solve systems of linear equations instead of the QPPs in MvLapSVM and MvLapTSVM. They handle general multi-view classification problems by combining multiple views in a non-pairwise way. The disagreement among different views is incorporated as a regularization term in the objective function to exploit consensus information. Multi-manifold regularization is adopted for multi-view semi-supervised learning, and combination weights for all views in the norm regularization terms exploit the complementary information among distinct views. Finally, an efficient alternating algorithm is proposed for optimization. Experiments on various real-world datasets show state-of-the-art generalization performance.
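The key computational claim above, that a least squares formulation replaces a QPP with a linear system, can be illustrated with the classical single-view least squares SVM (Suykens-style), whose KKT conditions reduce to one linear solve. This is only a minimal sketch of that general idea, not the paper's GMvLapSVM/GMvLapTSVM method; the linear kernel, the toy data, and the `gamma` parameter are illustrative assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Least squares SVM with a linear kernel (illustrative sketch).

    Equality constraints in the LS-SVM primal make the KKT conditions
    a single linear system in (b, alpha), so no QPP solver is needed.
    Labels y are assumed to be in {-1, +1}.
    """
    n = X.shape[0]
    K = X @ X.T                                  # linear kernel Gram matrix
    Omega = (y[:, None] * y[None, :]) * K        # label-weighted kernel
    # Bordered KKT system: [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)                # one linear solve, no QPP
    b, alpha = sol[0], sol[1:]
    w = (alpha * y) @ X                          # recover primal weights
    return w, b

# Toy linearly separable data in {-1, +1} classes
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = lssvm_train(X, y)
preds = np.sign(X @ w + b)
```

The same pattern underlies the proposed methods: replacing hinge-style inequality constraints with squared-error equality constraints turns each subproblem into a linear system, which is what makes the alternating optimization over views efficient.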
