Abstract

In many applications such as data compression, imaging, or genomic data analysis, it is important to approximate a given tensor by a tensor that is sparsely representable. For matrices, i.e. 2-tensors, such a representation can be obtained via the singular value decomposition (SVD), which allows one to compute best rank-\(k\) approximations. For very large matrices, a low rank approximation using the SVD is not computationally feasible. In this case other approximations are available, and variants of the CUR-decomposition appear to be the most suitable. For \(d\)-mode tensors \(\mathcal{T} \in \otimes_{i=1}^{d}\mathbb{R}^{n_{i}}\), with \(d > 2\), many generalizations of the singular value decomposition have been proposed to obtain low tensor rank decompositions. The most appropriate approximation seems to be the best \((r_1, \ldots, r_d)\)-approximation, which maximizes the \(\ell_2\) norm of the projection of \(\mathcal{T}\) on \(\otimes_{i=1}^{d} U_i\), where \(U_i\) is an \(r_i\)-dimensional subspace of \(\mathbb{R}^{n_i}\). One of the most common methods is the alternating maximization method (AMM). It proceeds by maximizing over one subspace \(U_i\) while keeping all the others fixed, and alternating this procedure repeatedly over \(i = 1, \ldots, d\). Usually, AMM converges to a local best approximation, which is a fixed point of a corresponding map on Grassmannians. We suggest a Newton method for finding the corresponding fixed point. We also discuss variants of the CUR-approximation method for tensors. The first part of the paper is a survey of low rank approximation of tensors. The second, new part of the paper is a Newton method for the best \((r_1, \ldots, r_d)\)-approximation. We compare the different approximation methods numerically.
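
The following is a minimal numpy sketch of the alternating maximization loop described above (often called HOOI): for each mode \(i\), the tensor is projected on the fixed subspaces \(U_j\), \(j \neq i\), and \(U_i\) is updated to the span of the leading left singular vectors of the resulting mode-\(i\) unfolding. The function names (`hooi`, `unfold`, `mode_multiply`) and the HOSVD initialization are illustrative choices, not taken from the paper, and the stopping rule is simplified to a fixed iteration count.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T along the given mode: rows indexed by that mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply T along `mode` by the matrix M of shape (k, n_mode)."""
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def hooi(T, ranks, n_iter=50):
    """Alternating maximization sketch for a best (r_1, ..., r_d)-approximation."""
    d = T.ndim
    # Initialize each U_i with the leading singular vectors of the mode-i unfolding (truncated HOSVD).
    U = [np.linalg.svd(unfold(T, i), full_matrices=False)[0][:, :ranks[i]] for i in range(d)]
    for _ in range(n_iter):
        for i in range(d):
            # Project T on all subspaces except the i-th, then maximize over U_i.
            Y = T
            for j in range(d):
                if j != i:
                    Y = mode_multiply(Y, U[j].T, j)
            U[i] = np.linalg.svd(unfold(Y, i), full_matrices=False)[0][:, :ranks[i]]
    # Core tensor and the resulting low multilinear rank approximation.
    core = T
    for j in range(d):
        core = mode_multiply(core, U[j].T, j)
    approx = core
    for j in range(d):
        approx = mode_multiply(approx, U[j], j)
    return U, core, approx
```

Each inner update is a matrix SVD of size \(n_i \times \prod_{j \neq i} r_j\), so the per-iteration cost is dominated by the projections; the loop monotonically increases \(\|\mathcal{T} \times_1 U_1^{\top} \cdots \times_d U_d^{\top}\|_F\) and hence converges to a local best approximation in the sense discussed above.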
