Abstract

We show that in finite-dimensional nonlinear approximations, the best $r$-term approximant of a function $f$ almost always exists over $\mathbb{C}$ but not over $\mathbb{R}$, i.e., the infimum $\inf_{f_1,\dots,f_r \in Y} \lVert f - f_1 - \dots - f_r \rVert$ is almost always attained by complex-valued functions $f_1,\dots, f_r$ in $Y$, a set of functions with some desired structure. Our result extends to functions possessing special properties such as symmetry or skew-symmetry under permutations of arguments. When $Y$ is the set of separable functions, the problem becomes that of best rank-$r$ tensor approximation. We show that over $\mathbb{C}$, any tensor almost always has a unique best rank-$r$ approximation. This extends to other notions of tensor rank, such as symmetric rank and alternating rank, to best $r$-block-term approximations, and to best approximations by tensor networks. Applied to sparse-plus-low-rank approximations, our result shows that for any given $r$ and $k$, a general tensor has a unique best approximation by a sum of a rank-$r$ tensor and a $k$-sparse tensor with a fixed sparsity pattern; this arises, for example, in estimating the covariance matrix of a Gaussian hidden variable model with $k$ observed variables that are conditionally independent given $r$ hidden variables. The existence (though not the uniqueness) part of our result also applies to best approximations by a sum of a rank-$r$ tensor and a $k$-sparse tensor with no fixed sparsity pattern, as well as to tensor completion problems.
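The failure of existence over $\mathbb{R}$ that motivates this result can be seen numerically in the classic border-rank example of de Silva and Lim (an illustration chosen here, not an example taken from this paper): the $2 \times 2 \times 2$ tensor $T = a \otimes a \otimes b + a \otimes b \otimes a + b \otimes a \otimes a$ has rank 3, yet it is a limit of rank-2 tensors, so the infimum over rank-2 tensors of $\lVert T - S \rVert$ is 0 and is never attained. A minimal NumPy sketch:

```python
import numpy as np

# Border-rank illustration (de Silva & Lim): T has rank 3 but is a limit
# of the rank-2 tensors
#   T_eps = (1/eps) * ( (a + eps*b)^{x3} - a^{x3} ),
# so no best rank-2 approximation of T exists over the reals.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def outer3(x, y, z):
    """Third-order outer product x (x) y (x) z as a 2x2x2 array."""
    return np.einsum('i,j,k->ijk', x, y, z)

T = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

for eps in [1.0, 0.1, 0.01, 0.001]:
    # Each T_eps is a difference of two rank-1 terms, hence rank <= 2.
    T_eps = (outer3(a + eps * b, a + eps * b, a + eps * b)
             - outer3(a, a, a)) / eps
    err = np.linalg.norm(T - T_eps)
    print(f"eps = {eps:g}: ||T - T_eps|| = {err:.6f}")
```

The printed errors shrink on the order of $\varepsilon\sqrt{3}$, confirming that rank-2 tensors approximate $T$ arbitrarily well even though $T$ itself has rank 3; any minimizing sequence of rank-2 tensors necessarily diverges.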
