Abstract

Low-rank self-representation based subspace learning has demonstrated its effectiveness in a broad range of applications. Nevertheless, existing studies mainly focus on exploring the global linear subspace structure, and cannot adequately handle the case where the samples lie only approximately (i.e., they contain data errors) in several more general affine subspaces. To overcome this drawback, in this paper we propose to introduce affine and nonnegative constraints into low-rank self-representation learning. Although these constraints are simple, we provide their underlying theoretical insight from a geometric perspective: together, the two constraints restrict each sample to be expressed as a convex combination of other samples in the same subspace. In this way, while exploring the global affine subspace structure, we also account for the local distribution of data within each subspace. To comprehensively demonstrate the benefits of introducing the two constraints, we instantiate three low-rank self-representation methods, ranging from single-view low-rank matrix learning to multi-view low-rank tensor learning, and design efficient optimization algorithms for the three proposed approaches. Extensive experiments are conducted on three typical tasks: single-view subspace clustering, multi-view subspace clustering, and multi-view semi-supervised classification. The superior experimental results verify the effectiveness of our proposals.
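For concreteness, one plausible instantiation of such a constrained single-view model is sketched below; this is only an illustration based on standard low-rank self-representation formulations, and the error term $E$, the $\ell_{2,1}$ norm, and the weight $\lambda$ are assumptions rather than the exact objective of the paper. Given a data matrix $X \in \mathbb{R}^{d \times n}$ whose columns are samples, one may solve
$$
\min_{C,\,E}\ \|C\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad
X = XC + E,\ \ \mathbf{1}^{\top} C = \mathbf{1}^{\top},\ \ C \ge 0,
$$
where $\|C\|_{*}$ is the nuclear norm encouraging a low-rank representation, the affine constraint $\mathbf{1}^{\top} C = \mathbf{1}^{\top}$ makes each column of $C$ sum to one, and the nonnegativity constraint $C \ge 0$ then forces every sample to be reconstructed as a convex combination of other samples, with $E$ absorbing data errors.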
