Abstract

Subspace clustering methods with a sparsity prior, such as Sparse Subspace Clustering (SSC) [1], are effective in partitioning data that lie in a union of subspaces. Most of these methods require certain assumptions on the subspaces, e.g. independence or disjointness. Such assumptions are not guaranteed to hold in practice, and they limit the applicability of existing sparse subspace clustering methods. In this paper, we propose \(\ell ^{0}\)-induced sparse subspace clustering (\(\ell ^{0}\)-SSC). In contrast to the independence or disjointness assumptions required by most existing sparse subspace clustering methods, we prove that subspace-sparse representation, a key element in subspace clustering, can be obtained by \(\ell ^{0}\)-SSC for arbitrary distinct underlying subspaces almost surely under a mild i.i.d. assumption on the data generation. We also present a “no free lunch” theorem showing that obtaining the subspace-sparse representation under our general assumptions cannot be much computationally cheaper than solving the corresponding \(\ell ^{0}\) problem of \(\ell ^{0}\)-SSC. We develop a novel approximate algorithm named Approximate \(\ell ^{0}\)-SSC (\(\hbox {A}\ell ^{0}\)-SSC) that employs proximal gradient descent to obtain a sub-optimal solution to the optimization problem of \(\ell ^{0}\)-SSC with a theoretical guarantee; the sub-optimal solution is then used to build a sparse similarity matrix for clustering. Extensive experimental results on various data sets demonstrate the superiority of \(\hbox {A}\ell ^{0}\)-SSC over competing clustering methods.
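To make the algorithmic idea concrete, the following is a minimal sketch of proximal gradient descent for an ℓ0-regularized self-representation problem of the form \(\min_{\alpha} \frac{1}{2}\|x_i - X\alpha\|_2^2 + \lambda \|\alpha\|_0\) with the self-representation coefficient \(\alpha_i\) fixed to zero, which is the standard per-sample formulation in this line of work. The key fact used here is that the proximal operator of the ℓ0 penalty is hard thresholding. The function name `a_l0_ssc` and the parameters `lam`, `step`, and `n_iter` are illustrative assumptions, not identifiers from the paper, and the step size and stopping rule are simplified relative to any theoretically analyzed variant.

```python
import numpy as np

def a_l0_ssc(X, lam=0.1, step=None, n_iter=200):
    """Sketch of approximate l0-SSC via proximal gradient descent.

    For each column x_i of X (features x samples), approximately solves
        min_a  0.5 * ||x_i - X a||_2^2 + lam * ||a||_0,  with a_i = 0,
    by alternating a gradient step on the smooth term with hard
    thresholding, the proximal operator of the l0 penalty.
    """
    d, n = X.shape
    if step is None:
        # 1 / Lipschitz constant of the gradient of the smooth term,
        # i.e. 1 / sigma_max(X)^2
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    A = np.zeros((n, n))
    thresh = np.sqrt(2.0 * lam * step)  # hard-threshold level for prox
    for i in range(n):
        a = np.zeros(n)
        for _ in range(n_iter):
            grad = X.T @ (X @ a - X[:, i])   # gradient of the smooth term
            a = a - step * grad              # gradient step
            a[np.abs(a) < thresh] = 0.0      # prox of step * lam * ||.||_0
            a[i] = 0.0                       # exclude self-representation
        A[:, i] = a
    # symmetric sparse similarity matrix for spectral clustering
    return np.abs(A) + np.abs(A).T
```

The returned similarity matrix would then be passed to a standard spectral clustering routine (for instance, scikit-learn's `SpectralClustering` with `affinity='precomputed'`) to produce the final partition.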
