Abstract

In order to treat high-dimensional problems, one has to find data-sparse representations. Starting with a six-dimensional problem, we first introduce the low-rank approximation of matrices. One purpose is the reduction of memory requirements; another advantage is that vector operations can now be applied instead of matrix operations. In the problem considered, the vectors correspond to grid functions defined on a three-dimensional grid. This leads to the next separation: these grid functions are tensors in $\mathbb{R}^n \otimes \mathbb{R}^n \otimes \mathbb{R}^n$ and can be represented by the hierarchical tensor format. Typical operations such as the Hadamard product and the convolution are then reduced to operations between vectors in $\mathbb{R}^n$. Standard algorithms for operations with vectors from $\mathbb{R}^n$ cost $\mathcal{O}(n)$ or more. The tensorisation method is a representation technique that introduces additional data-sparsity. In many cases, the data size can be reduced from $\mathcal{O}(n)$ to $\mathcal{O}(\log n)$. Even more importantly, operations such as the convolution can be performed with a cost corresponding to these reduced data sizes.
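As a concrete illustration of the first step, the following minimal sketch (not the paper's code; sizes, the test kernel, and function names are illustrative assumptions) computes a low-rank approximation of a matrix by truncated SVD and compares the storage of the two factors with that of the full matrix.

    # Minimal sketch (not the paper's code): low-rank approximation of a
    # matrix by truncated SVD. Sizes and the test kernel are illustrative.
    import numpy as np

    def low_rank_approx(A, r):
        """Best rank-r approximation of A in the Frobenius norm
        (Eckart-Young), returned as factors U, V with A ~= U @ V.T."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :r] * s[:r], Vt[:r, :].T   # singular values absorbed into U

    n, r = 1000, 10
    x = np.linspace(0.0, 1.0, n)
    A = np.exp(-np.abs(x[:, None] - x[None, :]))   # smooth kernel: fast singular-value decay
    U_r, V_r = low_rank_approx(A, r)
    print("full storage  :", A.size)                # n*n entries
    print("rank-r storage:", U_r.size + V_r.size)   # 2*n*r entries
    print("rel. error    :", np.linalg.norm(A - U_r @ V_r.T) / np.linalg.norm(A))

Similarly, the claimed reduction from $\mathcal{O}(n)$ to $\mathcal{O}(\log n)$ by tensorisation can be sketched as follows: a vector of length $n = 2^d$ is reshaped into a $2 \times \dots \times 2$ tensor with $d$ factors and compressed by a tensor-train-style sweep of SVDs. This is an assumed illustration of the general idea, not the paper's own algorithm; the tolerance and all names are placeholders.

    # Minimal sketch (assumptions: NumPy, a simple TT-SVD sweep) of
    # tensorisation: v of length n = 2**d is viewed as a 2 x ... x 2 tensor
    # and stored as d small cores. For "smooth" data the ranks stay small,
    # so storage drops from O(n) toward O(log n).
    import numpy as np

    def tensorise(v, tol=1e-10):
        """Return TT cores of the 2 x ... x 2 reshaping of v (len(v) = 2**d)."""
        d = len(v).bit_length() - 1
        assert 2 ** d == len(v)
        cores, rank, rest = [], 1, v.reshape(1, -1)
        for _ in range(d - 1):
            U, s, Vt = np.linalg.svd(rest.reshape(rank * 2, -1), full_matrices=False)
            keep = max(1, int(np.sum(s > tol * s[0])))   # truncate small singular values
            cores.append(U[:, :keep].reshape(rank, 2, keep))
            rest, rank = s[:keep, None] * Vt[:keep, :], keep
        cores.append(rest.reshape(rank, 2, 1))
        return cores

    n = 2 ** 12
    v = np.exp(np.linspace(0.0, 1.0, n))    # an exponential is exactly rank 1 here
    cores = tensorise(v)
    print("vector entries:", n)                            # O(n)
    print("core entries  :", sum(c.size for c in cores))   # O(log n) in this example

    # Reconstruct to check the representation is exact (up to tol).
    w = np.ones((1, 1))
    for c in cores:
        w = np.einsum('ar,rjs->ajs', w, c).reshape(-1, c.shape[2])
    print("rel. error    :", np.linalg.norm(w.ravel() - v) / np.linalg.norm(v))

For the sampled exponential all separation ranks equal one, so the $2^{12} = 4096$ vector entries collapse to a few dozen core entries; operations such as the convolution can then be carried out on the cores at a cost proportional to this reduced data size.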
