Abstract
Multi-view clustering has attracted growing attention owing to its powerful capacity for multi-source information integration. Although numerous advanced methods have been proposed in past decades, most of them fail to distinguish the unequal importance of different views to the clustering task and overlook the scale uniformity of the latent representations learned across views, resulting in unclear physical meaning and suboptimal model performance. To address these issues, in this paper we propose a joint learning framework, termed Adaptive-weighted deep Multi-view Clustering with Uniform scale representation (AMCU). Specifically, to achieve more reasonable multi-view fusion, we introduce an adaptive weighting strategy that imposes a simplex constraint on the weights of heterogeneous views, measuring their varying degrees of contribution to the consensus prediction. This simple yet effective strategy has a clear physical meaning for the multi-view clustering task. Furthermore, a novel regularizer is incorporated so that the latent representations learned for different views share approximately the same scale; the clustering loss is therefore not sensitive to any particular view, which in turn makes the training of the entire model more stable. Through comprehensive experiments on eight popular real-world datasets, we demonstrate that our proposal performs better than several state-of-the-art single-view and multi-view competitors.
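The two mechanisms summarized above — simplex-constrained adaptive view weights and a scale-uniformity penalty on per-view latent representations — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the sort-based simplex projection, and the variance-of-norms penalty are all illustrative assumptions standing in for the (unspecified here) losses of AMCU.

```python
import numpy as np

def project_to_simplex(w):
    # Euclidean projection onto the probability simplex
    # (standard sort-based algorithm; an assumed stand-in for
    # however AMCU enforces its simplex constraint).
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(w - theta, 0.0)

def scale_uniformity_penalty(reps):
    # Penalize differing average scales of per-view latent
    # representations (variance of mean row norms); zero when
    # all views share the same scale.
    scales = np.array([np.linalg.norm(Z, axis=1).mean() for Z in reps])
    return np.var(scales)

# Toy example: three views' latent representations (n samples x d dims)
rng = np.random.default_rng(0)
reps = [rng.normal(size=(5, 4)) * s for s in (1.0, 2.0, 3.0)]

# Unconstrained weights are projected so they are nonnegative and sum to 1,
# then used to fuse the views into a consensus representation.
w = project_to_simplex(np.array([0.5, 1.2, -0.3]))
consensus = sum(wi * Z for wi, Z in zip(w, reps))
penalty = scale_uniformity_penalty(reps)
```

In this toy run the projected weights are nonnegative and sum to one, and the penalty is strictly positive because the three views were deliberately given different scales; driving such a penalty toward zero is one plausible reading of the "uniform scale" objective described in the abstract.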