Abstract

As a pioneering method, tensor robust principal component analysis (TRPCA) separates an underlying low-rank component and a sparse component from the original data by minimizing a convex objective function composed of the tensor nuclear norm and the l1-norm. However, it has two limitations. First, the tensor nuclear norm, used as the constraint on the low-rank component, treats all singular values uniformly and ignores the differences among them; in essence, it is a sparse constraint imposed on the singular values of the low-rank component through the l1-norm. Second, the l1-norm serves as the constraint on the sparse component, but it is a loose constraint, causing the solutions of TRPCA to deviate from the true ones. To alleviate these issues, we propose a TRPCA model called p-TRPCA, which utilizes the lp quasi-norm to impose sparse constraints on both the singular values and the sparse component simultaneously. The lp quasi-norm (0<p<1) is a tighter constraint than the l1-norm, which enhances the low-rankness and sparsity of the proposed model. To solve p-TRPCA, we present an effective algorithm based on the alternating direction method of multipliers (ADMM) and also analyze its convergence. Extensive experiments are performed on simulated data, image recovery, and background modeling. Experimental results show that our p-TRPCA outperforms TRPCA and its variants.
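For orientation, the two formulations can be sketched as follows; the notation here is inferred from the abstract's description rather than quoted from the paper. Let $\mathcal{X}$ denote the observed tensor, $\mathcal{L}$ the low-rank component, $\mathcal{S}$ the sparse component, $\sigma_i(\mathcal{L})$ the singular values of the low-rank component, and $\lambda > 0$ a trade-off parameter. TRPCA solves the convex problem

$$\min_{\mathcal{L},\,\mathcal{S}} \; \|\mathcal{L}\|_{*} + \lambda \|\mathcal{S}\|_{1} \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L} + \mathcal{S},$$

while p-TRPCA, as described above, replaces both penalties with lp quasi-norm counterparts,

$$\min_{\mathcal{L},\,\mathcal{S}} \; \sum_{i} \sigma_{i}(\mathcal{L})^{p} + \lambda \|\mathcal{S}\|_{p}^{p} \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L} + \mathcal{S}, \qquad 0 < p < 1,$$

where $\|\mathcal{S}\|_{p}^{p} = \sum_{j} |s_{j}|^{p}$ sums over all entries of $\mathcal{S}$. For $0 < p < 1$ these nonconvex penalties approximate the rank function and the l0-norm more tightly than the tensor nuclear norm and l1-norm do, which is the source of the claimed stronger low-rankness and sparsity.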
