Abstract

Low-rank representation reveals highly informative structure entailed in sparse matrices, and double low-rank representation (DLRR) offers an effective solution by adopting the nuclear norm. However, the nuclear norm is a special case of the Schatten-p norm with p=1, which treats all singular values equally, deviating from the optimal low-rank representation attained at p=0. This paper therefore improves the generalization of DLRR by relaxing p=1 to 0<p≤1, tightening the low-rank constraint imposed by the Schatten-p norm. With this relaxation, the low-rank optimization is also accelerated, yielding a lower bound on computational complexity. Experiments on unsupervised feature extraction and subspace clustering demonstrate that our low-rank optimization with 0<p≤1 achieves superior performance against state-of-the-art methods.
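The role of p can be illustrated with a minimal sketch (the function name and the diagonal test matrix are illustrative assumptions, not taken from the paper): the Schatten-p quasi-norm raised to the p-th power, Σᵢ σᵢᵖ, recovers the nuclear norm at p=1 and approaches rank(X) as p→0, which is why shrinking p tightens the low-rank constraint.

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p quasi-norm raised to the p-th power: sum_i sigma_i^p.
    p=1 recovers the nuclear norm; as p -> 0 it approaches rank(X).
    (Illustrative helper, not the paper's implementation.)"""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p))

# Rank-2 matrix with singular values 3, 1, 0.
X = np.diag([3.0, 1.0, 0.0])
print(schatten_p(X, 1.0))   # 4.0   (nuclear norm: treats 3 and 1 equally)
print(schatten_p(X, 0.5))   # ~2.73 (closer to rank(X) = 2)
print(schatten_p(X, 0.01))  # ~2.01 (nearly the rank)
```

This is the intuition behind relaxing p=1 to 0<p≤1: smaller p penalizes large singular values less severely relative to small ones, so the regularizer better approximates the rank function.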

