Abstract

We propose a combined tensor format, which encapsulates the benefits of the Tucker, tensor train (TT), and quantized TT (QTT) formats. The structure is composed of subtensors in TT representations, and the approximation problem is proven to be stable. We describe all important algebraic and optimization operations, which are recast as TT routines. Several examples of explicit function and operator representations are provided. The asymptotic storage complexity is at most cubic in the rank parameter, which is larger than for the global QTT approximation, but the numerical examples demonstrate that the ranks in the two-level format usually grow more slowly with the approximation accuracy than the QTT ranks do. In particular, we observe that the high-rank peaks that usually occur in TT/QTT representations are significantly relaxed. Thus, reduced costs can be achieved.
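To illustrate the quantization idea behind the QTT format referenced above, the sketch below (not the paper's combined format, just an assumed minimal demonstration) reshapes a vector of length 2^d into a d-dimensional 2×2×...×2 tensor and compresses it with the standard TT-SVD procedure. For a smooth function such as exp(x) on a uniform grid, the resulting QTT ranks are tiny, which is the effect the abstract's rank discussion builds on.

```python
# Minimal QTT illustration: quantize a length-2^d vector into a 2x2x...x2
# tensor and compress it with TT-SVD. This is a generic textbook sketch,
# not the two-level Tucker-TT-QTT format proposed in the paper.
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose `tensor` into TT cores via successive truncated SVDs.

    Returns the list of 3D cores and the list of TT ranks.
    """
    shape = tensor.shape
    d = len(shape)
    cores, ranks = [], [1]
    c = tensor.reshape(1, -1)
    for k in range(d - 1):
        # Fold the current unfolding so rows carry (previous rank x mode k).
        c = c.reshape(ranks[-1] * shape[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        # Keep singular values above the relative tolerance.
        r = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :r].reshape(ranks[-1], shape[k], r))
        ranks.append(r)
        c = np.diag(s[:r]) @ vt[:r]
    cores.append(c.reshape(ranks[-1], shape[-1], 1))
    ranks.append(1)
    return cores, ranks

d = 10
x = np.linspace(0.0, 1.0, 2 ** d)
v = np.exp(x)  # exp factorizes over the binary digits of the grid index,
               # so all interior QTT ranks are exactly 1
cores, ranks = tt_svd(v.reshape([2] * d), tol=1e-12)
print(ranks)
```

For a rougher function (e.g. one with a localized singularity), the same procedure produces the rank peaks the abstract mentions; the proposed two-level format is reported to relax them.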
