Abstract

Subspace learning has many applications, such as motion segmentation and image recognition. Existing algorithms for subspace learning based on the self-expressiveness of samples may suffer from an unsuitable balance between the rank and the sparsity of the expressive matrix. In this paper, a new model is proposed that balances rank and sparsity well. The model adopts the log-determinant function to control the rank of the solution. Meanwhile, the diagonal entries are penalized rather than strictly constrained to zero, which makes the rank–sparsity balance more tunable. We further give a new graph construction from the low-rank and sparse solution, which absorbs the advantages of the graph constructions in sparse subspace clustering (SSC) and low-rank representation (LRR) for subsequent clustering. Numerical experiments show that the new method, named RSBR, significantly increases the accuracy of subspace clustering on the real-world data sets we tested.
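To make the ingredients of the abstract concrete, the following is a minimal sketch of a self-expressive objective of the kind described: a reconstruction term X ≈ XZ, a log-determinant surrogate controlling the rank of Z, an ℓ1 term controlling its sparsity, and a soft penalty on the diagonal of Z instead of a hard diag(Z) = 0 constraint, plus the standard symmetric affinity graph built from Z for spectral clustering. The function and parameter names (`rsbr_objective`, `lam`, `gamma`, `eps`) are illustrative assumptions, not the paper's actual formulation or notation.

```python
import numpy as np

def rsbr_objective(X, Z, lam=1.0, gamma=0.1, eps=1.0):
    """Illustrative objective: reconstruction + log-det rank surrogate
    + sparsity + soft diagonal penalty. Names and weighting are
    assumptions for exposition, not the paper's exact model."""
    # Self-expressiveness: each sample approximated by the others, X ~ X Z.
    residual = 0.5 * np.linalg.norm(X - X @ Z, "fro") ** 2
    # Log-determinant rank surrogate: sum_i log(1 + sigma_i(Z)^2 / eps),
    # a smooth, nonconvex relaxation of rank(Z).
    sigma = np.linalg.svd(Z, compute_uv=False)
    logdet = np.sum(np.log(1.0 + sigma ** 2 / eps))
    # l1 term promoting sparsity of the expressive matrix.
    sparsity = np.abs(Z).sum()
    # Soft penalty on the diagonal (discourages trivial self-representation),
    # in place of the strict zero-diagonal constraint used in SSC.
    diag_pen = np.abs(np.diag(Z)).sum()
    return residual + lam * logdet + gamma * (sparsity + diag_pen)

def affinity_from_coefficients(Z):
    """Symmetric affinity graph W = (|Z| + |Z^T|) / 2, the usual bridge
    from a coefficient matrix to spectral clustering in SSC/LRR pipelines."""
    A = np.abs(Z)
    return 0.5 * (A + A.T)
```

A solver would minimize this objective over Z (e.g. by alternating or proximal updates) and then feed `affinity_from_coefficients(Z)` to spectral clustering; the sketch only fixes the quantities being balanced, not the optimization scheme.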
