Abstract

We propose a combined tensor format that encapsulates the benefits of the Tucker, tensor train (TT), and quantized TT (QTT) formats. The structure is composed of subtensors in TT representations, so the approximation problem is provably stable. We describe all important algebraic and optimization operations, which are recast as TT routines. Several examples of explicit function and operator representations are provided. The asymptotic storage complexity is at most cubic in the rank parameter, which is larger than for the global QTT approximation, but the numerical examples show that the ranks in the two-level format usually grow more slowly with the approximation accuracy than the QTT ranks. In particular, we observe that the high rank peaks that typically occur in TT/QTT representations are significantly relaxed. Thus reduced costs can be achieved.
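As background for the TT representation on which the proposed format builds, the sketch below illustrates the standard TT-SVD decomposition of a full tensor by sequential truncated SVDs. This is an illustrative sketch of the classical TT algorithm only, not the two-level format or the authors' routines; the function names `tt_svd` and `tt_to_full` are our own.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full tensor into TT cores via sequential truncated SVDs.

    Illustrative sketch of the classical TT-SVD; each core has shape
    (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1.
    """
    shape = tensor.shape
    d = len(shape)
    # Distribute the accuracy budget eps over the d-1 truncated SVDs.
    delta = eps / np.sqrt(d - 1) * np.linalg.norm(tensor) if d > 1 else 0.0
    cores = []
    r_prev = 1
    unfolding = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        # Smallest rank whose discarded tail of singular values fits delta.
        r = max(1, int(np.sum(np.cumsum(s[::-1] ** 2) > delta ** 2)))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        unfolding = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (for verification)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=(-1, 0))
    # Drop the boundary rank-1 dimensions.
    return full[0, ..., 0]
```

For a tensor such as `np.fromfunction(lambda i, j, k: np.sin(i + j + k), (4, 5, 6))`, the TT ranks stay small and `tt_to_full(tt_svd(A))` reproduces `A` up to the requested accuracy; the rank growth with accuracy is precisely what the combined format above aims to tame.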
