Abstract
Auto-encoder (AE)-based deep subspace clustering (DSC) methods aim to partition high-dimensional data into underlying clusters, where each cluster corresponds to a subspace. As a standard module in current AE-based DSC, the self-reconstruction cost plays an essential role in regularizing the feature learning. However, self-reconstruction adversely affects the discriminative feature learning of the AE, thereby hampering the downstream subspace clustering. To address this issue, we propose a hypergraph-supervised reconstruction to replace the self-reconstruction. Specifically, instead of enforcing the decoder in the AE to merely reconstruct the samples themselves, the hypergraph-supervised reconstruction encourages reconstructing samples according to their high-order neighborhood relations. Through back-propagation training, the hypergraph-supervised reconstruction cost enables the deep AE to capture the high-order structure information among samples, facilitating discriminative feature learning and thus alleviating the adverse effect of the self-reconstruction cost. Compared with current DSC methods that rely on self-reconstruction, our method achieves consistent performance improvement on benchmark high-dimensional datasets.
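The core change described above can be sketched numerically: rather than penalizing the distance between the decoder output and the sample itself, the cost penalizes the distance to a high-order neighborhood mixture of the sample. The sketch below is illustrative, not the paper's exact formulation; the incidence matrix `H`, the propagation matrix `S`, and the normalization are assumptions based on standard hypergraph constructions.

```python
import numpy as np

def hypergraph_propagation(H, W=None):
    """Row-normalized propagation matrix S from an incidence matrix H
    (n samples x m hyperedges). W holds hyperedge weights (assumed uniform here)."""
    n, m = H.shape
    if W is None:
        W = np.ones(m)                       # uniform hyperedge weights
    De = H.sum(axis=0)                       # hyperedge degrees
    Dv = (H * W).sum(axis=1)                 # vertex degrees
    S = (H * W / De) @ H.T / Dv[:, None]     # S[i, j]: influence of sample j on sample i
    return S

def hypergraph_recon_cost(X, X_hat, S):
    """Decoder output X_hat is compared against the high-order neighborhood
    mixture S @ X instead of the samples X themselves (self-reconstruction)."""
    return np.mean((X_hat - S @ X) ** 2)

# toy example: 4 samples, 2 hyperedges grouping samples {0, 1} and {2, 3}
X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1]])
H = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
S = hypergraph_propagation(H)
target = S @ X   # each sample is replaced by the mean of its hyperedge neighbors
```

A decoder that outputs `target` exactly incurs zero cost, so minimizing this term pulls each reconstruction toward its high-order neighborhood rather than toward the raw sample.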
Highlights
Auto-encoder (AE)-based deep subspace clustering (DSC) methods aim to partition high-dimensional data into underlying clusters, where each cluster corresponds to a subspace
To train the hypergraph-supervised deep subspace clustering method (H-DSC), we propose a two-phase strategy: (1) pre-train the deep AE to provide an initialization; (2) train the whole deep subspace clustering network, H-DSC, with the above cost function
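The two-phase strategy above can be sketched with a minimal linear AE trained by gradient descent. This is a schematic sketch only: the network here is a toy linear AE, and the learning rate, iteration count, and loss names are assumptions, not the paper's training details.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))              # 50 samples, 8 features
W_enc = rng.standard_normal((8, 3)) * 0.1     # encoder weights (latent dim 3)
W_dec = rng.standard_normal((3, 8)) * 0.1     # decoder weights

def recon_loss(X, W_enc, W_dec):
    Z = X @ W_enc                              # latent codes
    return np.mean((X - Z @ W_dec) ** 2)

# Phase 1: pre-train the AE with a plain reconstruction cost to get an initialization
lr = 0.5
losses = []
for _ in range(200):
    Z = X @ W_enc
    R = Z @ W_dec - X                          # reconstruction residual
    g_dec = 2 * Z.T @ R / X.size               # gradient of the mean-squared error
    g_enc = 2 * X.T @ (R @ W_dec.T) / X.size
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
    losses.append(recon_loss(X, W_enc, W_dec))

# Phase 2 (not shown): continue from this initialization with the full H-DSC
# objective, i.e. hypergraph-supervised reconstruction plus the remaining cost terms.
```

The pre-training phase only has to supply a reasonable initialization; the clustering-specific terms enter in phase 2.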
We present empirical evidence for the effectiveness of our proposed hypergraph-supervised deep subspace clustering method (H-DSC)
Summary
Subspace clustering has drawn increasing attention in computational intelligence and data mining [1,2]. Zhang et al. [13] proposed a self-supervised convolutional subspace clustering network (S2ConvSCN) that enhances the deep discriminative features with pseudo-labels; they insert a fully-connected (FC) layer, coupled with a softmax function, after the encoder. It has been shown that the self-reconstruction cost over-regularizes the deep AE and degrades the discriminability of the latent representation of the encoder [15,16]. Through back-propagation training, the hypergraph-supervised cost function allows the encoder to capture high-order neighborhood structure information [20]. In this sense, the hypergraph-supervised reconstruction cost enhances the discriminative feature learning of the encoder and mitigates the adverse effect of the self-reconstruction, leading to improved clustering performance. We also incorporate Lcls for pseudo-label training to further improve clustering performance.
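The pseudo-label cost mentioned above can be sketched as a cross-entropy between the FC-plus-softmax predictions on the latent codes and the pseudo-labels. The function names and the toy setup below are assumptions for illustration; how the pseudo-labels are produced (e.g. from spectral clustering) is not shown.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def l_cls(Z, W_fc, pseudo_labels):
    """Cross-entropy between FC+softmax predictions on latent codes Z
    and the pseudo-labels (a sketch of the Lcls term)."""
    P = softmax(Z @ W_fc)
    n = len(pseudo_labels)
    return -np.mean(np.log(P[np.arange(n), pseudo_labels] + 1e-12))

# toy check: latent codes already aligned with the pseudo-labels give a near-zero cost
Z = np.eye(3)                 # 3 samples, 3-dim latent codes
W_fc = 10.0 * np.eye(3)       # FC layer mapping codes to 3 class logits
labels = np.array([0, 1, 2])  # pseudo-labels
```

Minimizing this term pushes the encoder toward latent codes that are separable under the pseudo-label assignment, which is the discriminative effect the summary describes.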