Abstract

Margin distribution has been shown to play a crucial role in improving generalization performance. Many recent methods build on the large margin distribution machine (LDM), which combines margin distribution with the support vector machine (SVM) to achieve better performance. However, these methods are usually designed for single-view data and ignore the connections between different views. In this article, we propose a new multiview margin distribution model, called MVLDM, which constructs both the multiview margin mean and the multiview margin variance. In addition, we propose a framework for multiview learning (MVL). MVLDM provides a new way to exploit complementary information in MVL from the perspective of margin distribution and satisfies both the consistency principle and the complementarity principle. In the theoretical analysis, we use Rademacher complexity theory to derive the consistency error bound and the generalization error bound of MVLDM. In the experiments, we design a new performance metric, the view consistency rate (VCR), tailored to the characteristics of multiview data. The effectiveness of MVLDM is evaluated using both VCR and traditional performance metrics. The experimental results show that MVLDM outperforms the benchmark methods.
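For context, a minimal sketch of the margin statistics that the standard single-view LDM optimizes, and that MVLDM extends to multiple views; the notation below follows the usual LDM formulation and is not taken from this paper. For a training set $\{(x_i, y_i)\}_{i=1}^{m}$ with labels $y_i \in \{-1, +1\}$, the margin of sample $i$ is

$$\gamma_i = y_i \left( w^\top \phi(x_i) + b \right),$$

and the margin mean and margin variance are

$$\bar{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \gamma_i, \qquad \hat{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \left( \gamma_i - \bar{\gamma} \right)^2.$$

LDM augments the SVM objective by maximizing $\bar{\gamma}$ while minimizing $\hat{\gamma}$; MVLDM constructs analogous mean and variance terms across views so that the margin distributions of the individual views are optimized jointly.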
