Abstract
Margin distribution has been shown to play a crucial role in improving generalization ability. Many recent methods build on the large margin distribution machine (LDM), which combines margin distribution with the support vector machine (SVM) to achieve better performance. However, these methods are usually designed for single-view data and ignore the connections between different views. In this article, we propose a new multiview margin distribution model, called MVLDM, which constructs both a multiview margin mean and a multiview margin variance, together with a framework for multiview learning (MVL). MVLDM offers a new way to exploit complementary information in MVL from the perspective of margin distribution and satisfies both the consistency principle and the complementarity principle. In the theoretical analysis, we use Rademacher complexity theory to derive the consistency error bound and the generalization error bound of MVLDM. In the experiments, we introduce a new performance metric, the view consistency rate (VCR), tailored to the characteristics of multiview data, and evaluate the effectiveness of MVLDM with both VCR and traditional performance metrics. The experimental results show that MVLDM outperforms the benchmark methods.
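As a rough illustration of the statistics that LDM-style methods optimize (a minimal sketch, not the paper's actual MVLDM formulation; all variable names and values below are hypothetical), the margin mean and margin variance of a linear classifier can be computed as:

```python
import numpy as np

def margin_statistics(w, b, X, y):
    """First- and second-order margin statistics used by LDM-style models.

    The margin of sample i is gamma_i = y_i * (w . x_i + b). Instead of
    maximizing only the minimum margin (as in the classical SVM), LDM-style
    objectives maximize the margin mean and minimize the margin variance.
    """
    margins = y * (X @ w + b)
    return margins.mean(), margins.var()

# Toy example: a small linearly separable 2-D dataset (hypothetical values).
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.array([0.5, 0.5])  # a candidate separating direction
b = 0.0

mean_m, var_m = margin_statistics(w, b, X, y)
# All margins are positive here, so every point is correctly classified;
# an LDM-style objective would trade off a large mean_m against a small var_m.
```

In a multiview setting such as MVLDM, analogous mean and variance terms would be built per view and coupled across views; the sketch above only shows the single-view statistics the model generalizes.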
Published in: IEEE Transactions on Neural Networks and Learning Systems