Abstract

To date, many efficient classification methods have exploited the rich information in multi-view data. Nevertheless, they commonly build models by concatenating all views into high-dimensional vectors, ignoring both the individuality of views and the relationships among them. They also tend to classify with fixed labels, overlooking the need for a large margin between distinct classes. To address these problems, we propose a new block-based multi-view classification model built on view-based L2,p sparse representation and adaptive view fusion. Specifically, the model imposes L2,p regularization in each view space to mine the view-specific (individuality) information, and combines a newly proposed shared loss term across views to capture their complementarity and consistency. Adaptive weighting measures the contribution of each view during view fusion, and slack labels are adopted to enlarge the distance between distinct classes. Furthermore, an Alternating Direction Method of Multipliers (ADMM)-based algorithm is designed to solve the model rapidly through block computation, and a rigorous theoretical proof of its convergence is provided. Extensive experiments demonstrate that the proposed method achieves superior performance.
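For readers unfamiliar with the L2,p row-sparsity regularizer referred to above, the following minimal sketch (not the authors' code) illustrates one common form of the term, the sum of the p-th powers of the row 2-norms of a per-view coefficient matrix W; the function name and the choices of p and eps are illustrative, and the exact per-view formulation in the paper may differ.

```python
# Illustrative sketch of an l2,p-style row-sparsity regularizer,
# not the paper's implementation: reg(W) = sum_i ||w_i||_2^p over rows w_i,
# which encourages whole rows of W to shrink toward zero for 0 < p <= 1.
import numpy as np

def l2p_regularizer(W: np.ndarray, p: float = 0.5, eps: float = 1e-12) -> float:
    """Return sum_i ||w_i||_2^p over the rows of W (assumed form; p, eps are illustrative)."""
    row_norms = np.sqrt(np.sum(W * W, axis=1) + eps)  # smoothed row 2-norms for numerical stability
    return float(np.sum(row_norms ** p))

# Example: evaluate the regularizer on a random per-view coefficient matrix
W_view = np.random.randn(100, 10)
print(l2p_regularizer(W_view, p=0.5))
```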
