Abstract

Subspace clustering aims to cluster a collection of data points lying in a union of subspaces. Based on the assumption that each point can be approximately represented as a linear combination of the other points, extensive efforts have been made to compute an affinity matrix within a self-expressive framework that describes the similarity between points. However, existing clustering methods consider averaged feature solutions, which are not powerful enough to capture the intrinsic relationships between points. In this paper, we present the $k$-support norm subspace clustering (KSC) method, which exploits $k$-support norm regularization. The $k$-support norm trades off the sparsity of the $\ell_{1}$ norm against the uniform shrinkage of the $\ell_{2}$ norm to yield better predictive performance on the data connections. The theoretical analysis of KSC constitutes a large proportion of the paper. In the noise-free case, we provide the kEBD condition, which ensures that the coefficient matrix is block diagonal. When the data are corrupted, we prove the incompletion-grouping effect for KSC. Moreover, we provide statistical recovery guarantees for both the noise-free and noisy cases. The theoretical analyses show the validity and feasibility of KSC, and experimental results on multiple challenging databases demonstrate the effectiveness of the proposed algorithm.
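For context, the following is a minimal sketch of the $k$-support norm (as defined in the literature by Argyriou, Foygel, and Srebro) and of a plausible self-expressive objective consistent with the abstract; the notation $X$, $C$, $c_{j}$, and $\mathcal{G}_{k}$ is assumed here for illustration, and the exact formulation used in the paper may differ. For $w \in \mathbb{R}^{d}$ and $1 \le k \le d$, with $\mathcal{G}_{k}$ the collection of index subsets of $\{1,\dots,d\}$ of cardinality at most $k$,
\[
\|w\|_{k}^{sp} \;=\; \min\Big\{ \sum_{g \in \mathcal{G}_{k}} \|v_{g}\|_{2} \;:\; \operatorname{supp}(v_{g}) \subseteq g,\; \sum_{g \in \mathcal{G}_{k}} v_{g} = w \Big\},
\]
which reduces to the $\ell_{1}$ norm when $k=1$ and to the $\ell_{2}$ norm when $k=d$, making the trade-off between sparsity and uniform shrinkage explicit. Under the self-expressive assumption, a noise-free formulation of this type could read
\[
\min_{C} \; \sum_{j=1}^{n} \big(\|c_{j}\|_{k}^{sp}\big)^{2} \quad \text{s.t.} \quad X = XC,\; \operatorname{diag}(C) = 0,
\]
where $X \in \mathbb{R}^{d \times n}$ stacks the data points as columns and $c_{j}$ is the $j$-th column of the coefficient matrix $C$, whose symmetrized magnitudes then serve as the affinity matrix for spectral clustering.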


