Abstract

Robust principal component analysis (RPCA) [E. J. Candès et al., J. ACM, 58 (2011), pp. 1--37] and low-rank matrix completion [B. Recht, M. Fazel, and P. A. Parrilo, SIAM Rev., 52 (2010), pp. 471--501] are extensions of PCA that allow for outliers and missing entries, respectively. It is well known that solving these problems requires low coherence between the low-rank matrix and the canonical basis, since in the extreme case---when the low-rank matrix we wish to recover is also sparse---there is an inherent ambiguity. However, in both problems the well-posedness issue is even more fundamental: in some cases, both RPCA and matrix completion can fail to have any solution because the set of low-rank plus sparse matrices is not closed, which in turn is equivalent to the matrix rigidity function not being lower semicontinuous [A. Kumar et al., Comput. Complex., 23 (2014), pp. 531--563]. By constructing infinite families of matrices, we derive bounds on the rank and sparsity such that the set of low-rank plus sparse matrices is not closed. We also demonstrate numerically that a wide range of nonconvex algorithms for both RPCA and matrix completion have diverging components when applied to our constructed matrices. This is analogous to sets of higher-order tensors not being closed under canonical polyadic (CP) tensor rank, which renders the best low-rank tensor approximation problem unsolvable [V. de Silva and L.-H. Lim, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 1084--1127] and hence encourages the use of the multilinear tensor rank [L. De Lathauwer, B. De Moor, and J. Vandewalle, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1324--1342].
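
To make the non-closedness concrete, the following Python sketch gives a small illustrative example of the phenomenon (our own 3x3 construction, not necessarily the family built in the paper): each X(eps) is exactly a rank-1 matrix plus a single sparse entry, yet the entrywise limit X cannot be written as rank-1 plus one sparse entry. The helper name unfixable_by_single_change is introduced here for illustration; it certifies that every single-entry modification of X leaves some nonsingular 2x2 minor unchanged, so no single change can reduce X to rank at most 1.

```python
# A minimal numerical sketch (illustrative 3x3 example, not the paper's exact
# construction): a family X(eps) of rank-1 plus 1-sparse matrices whose
# entrywise limit X is NOT itself rank-1 plus 1-sparse.
import numpy as np
from itertools import combinations

def family(eps):
    """X(eps) = L(eps) + S(eps), with rank(L) = 1 and S having one nonzero."""
    L = np.outer([1.0, 1.0, 1.0 / eps], [1.0, eps, eps])  # rank-1 component
    S = np.zeros((3, 3))
    S[2, 0] = -1.0 / eps                                   # single sparse entry
    return L + S

# Entrywise limit of the family as eps -> 0.
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(X) == 2

for eps in [1e-1, 1e-3, 1e-6]:
    print(f"eps = {eps:.0e},  max|X(eps) - X| = {np.max(np.abs(family(eps) - X)):.1e}")

def unfixable_by_single_change(A, tol=1e-12):
    """Certify that no single-entry change can make A rank <= 1.

    For every entry (i, j) we look for a 'witness': a nonsingular 2x2 minor of A
    whose determinant cannot be affected by changing A[i, j], either because the
    minor avoids (i, j) or because the cofactor of (i, j) inside it is zero.
    If every entry has a witness, some 2x2 minor stays nonsingular after any
    single change, so the rank stays >= 2.
    """
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            found = False
            for rows in combinations(range(n), 2):
                for cols in combinations(range(n), 2):
                    M = A[np.ix_(rows, cols)]
                    if abs(M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]) < tol:
                        continue  # singular minor, cannot serve as a witness
                    if i not in rows or j not in cols:
                        found = True  # minor does not involve entry (i, j)
                    else:
                        r, c = rows.index(i), cols.index(j)
                        if abs(M[1 - r, 1 - c]) < tol:
                            found = True  # determinant is constant in A[i, j]
                    if found:
                        break
                if found:
                    break
            if not found:
                return False
    return True

print("limit X is rank-1 plus 1-sparse:", not unfixable_by_single_change(X))
```

Each X(eps) lies in the set of rank-1 plus 1-sparse matrices, so the distance from the limit X to that set is zero, yet the check above shows this distance is not attained by any member of the set. This is exactly the failure of closedness, equivalently of lower semicontinuity of the matrix rigidity function, described in the abstract.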
