Abstract

In recent years, we have witnessed a surge of interest in multi-view representation learning. When the views are highly related yet slightly different from one another, most existing multi-view methods fail to fully exploit the available multi-view information. In addition, the pairwise correlations among views often vary drastically, which makes learning a shared multi-view representation challenging. How to learn an appropriate representation from multi-view information therefore remains an open problem. To address this issue, this paper proposes a novel multi-view learning method, named Multi-view Low-rank Preserving Embedding (MvLPE). It integrates all views into a common latent space, termed the centroid view, by minimizing the disagreement between the centroid view and each individual view, which encourages the views to learn from each other. Unlike existing methods that rely on explicitly defined weights, the proposed method automatically allocates a suitable weight to each view according to its contribution. Moreover, MvLPE preserves the low-rank reconstruction structure of each view while fusing all views into the centroid view. Since MvLPE has no closed-form solution, an effective algorithm based on an iterative alternating strategy is provided to solve it. Experiments on six benchmark datasets validate the effectiveness of the proposed method, which achieves superior performance over its counterparts.
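To make the alternating strategy described above concrete, the following is a minimal sketch, assuming each view has already been mapped to an aligned per-view embedding and that the view weights follow a common auto-weighting heuristic (weight inversely proportional to the square root of each view's disagreement with the centroid). The function name `mvlpe_centroid_sketch`, the input format, and the specific weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mvlpe_centroid_sketch(view_embeddings, n_iters=50, eps=1e-8):
    """Hypothetical sketch of alternating updates between a centroid
    embedding and per-view weights.

    view_embeddings: list of (n_samples, d) arrays, assumed to be
    aligned per-view embeddings. The weighting rule
    w_v = 1 / (2 * sqrt(disagreement_v)) is a standard auto-weighting
    heuristic and may differ from the paper's exact rule.
    """
    m = len(view_embeddings)
    weights = np.full(m, 1.0 / m)              # start from uniform weights
    Y = np.mean(view_embeddings, axis=0)       # initial centroid view

    for _ in range(n_iters):
        # Step 1: fix the weights and update the centroid view as the
        # weighted average of the per-view embeddings.
        Y = sum(w * Yv for w, Yv in zip(weights, view_embeddings)) / weights.sum()

        # Step 2: fix the centroid and re-weight each view by its
        # disagreement with the centroid; views that agree more with
        # the centroid automatically receive larger weights.
        disagreement = np.array(
            [np.linalg.norm(Y - Yv, "fro") ** 2 for Yv in view_embeddings]
        )
        weights = 1.0 / (2.0 * np.sqrt(disagreement) + eps)

    return Y, weights / weights.sum()
```

Under these assumptions, each iteration alternates between a closed-form centroid update and a closed-form weight update, which mirrors the iterative alternating strategy the abstract refers to; the full method additionally enforces the per-view low-rank preserving constraints, which this sketch omits.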
