Abstract

Deep multi-view subspace clustering methods have achieved impressive performance compared with other clustering methods. However, existing methods either cannot integrate the global and local information of multiple views or fail to explore the discriminative contributions of individual views. In this paper, we propose a novel multi-scale deep multi-view subspace clustering (MDMVSC) method, which unifies a multi-scale learning (ML) module, a self-weighting fusion (SF) module, and a structure-preserving (SP) constraint. Specifically, to exploit the complementarity and diversity of different views, the ML module first learns a view-specific self-representation matrix from multi-scale low-dimensional latent features that capture both global and local information. The SF module then fuses these matrices into a consensus representation of the multiple views, using attention-guided weights that reflect each view's discriminative contribution. Moreover, the SP constraint encourages the multi-scale latent features to preserve the structural information of the original views, enhancing their representation ability. Extensive experiments on five datasets demonstrate the superiority of MDMVSC over several state-of-the-art methods.
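To make the fusion step concrete, the following is a minimal sketch of attention-weighted fusion of per-view self-representation matrices, as described for the SF module. The function name, the fixed score vector, and the toy matrices are illustrative assumptions; in the paper the weights come from a learned attention mechanism, not hand-set scores.

```python
import numpy as np

def fuse_self_representations(C_list, scores):
    """Fuse per-view self-representation matrices into a consensus matrix
    using softmax-normalized weights (a stand-in for learned attention).

    C_list : list of (n, n) self-representation matrices, one per view
    scores : raw per-view relevance scores (assumed given here; the paper
             learns them via an attention mechanism)
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()                  # weights sum to 1 over views
    consensus = sum(w * C for w, C in zip(weights, C_list))
    return consensus, weights

# Toy example: two 3x3 self-representation matrices from two views.
C1 = np.eye(3)                # view 1: sparse, diagonal-like structure
C2 = np.ones((3, 3)) / 3.0    # view 2: dense, uniform structure
C, w = fuse_self_representations([C1, C2], scores=[1.0, 0.0])
```

The view with the higher score receives the larger weight, so its self-representation matrix dominates the consensus, which mirrors how a more discriminative view should contribute more to the fused representation.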
