Abstract

In this paper, we consider the Poisson equation on a “long” domain which is the Cartesian product of a one-dimensional long interval with a (d − 1)-dimensional domain. The right-hand side is assumed to have rank-1 tensor structure. We present and compare methods to construct approximations of the solution which have tensor structure and whose computational cost is governed by solving elliptic problems on lower-dimensional domains only. A zeroth-order tensor approximation is derived by using tools from asymptotic analysis (method 1). The resulting approximation is an elementary tensor and hence has a fixed error, which turns out to be very close to the best possible zeroth-order approximation. This approximation can be used as a starting guess for the derivation of higher-order tensor approximations by a greedy-type method (method 2). Numerical experiments show that this method converges towards the exact solution. Method 3 is based on the derivation of a tensor approximation via exponential sums applied to discretized differential operators and their inverses. It can be proved that this method converges exponentially with respect to the tensor rank. We present numerical experiments which compare the performance and sensitivity of these three methods.
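The core step of the greedy method (method 2) can be illustrated on a discretized model problem. The sketch below assumes a finite-difference discretisation in which the system matrix is the Kronecker sum A₁ ⊗ I + I ⊗ A₂ and the right-hand side is the rank-1 tensor f ⊗ g; one rank-1 term v ⊗ w is then computed by alternating least squares (ALS) on the energy functional. The names `lap1d` and `als_rank1` are illustrative and not from the paper.

```python
import numpy as np

def lap1d(n, h=1.0):
    """Standard 3-point finite-difference Laplacian (Dirichlet), SPD."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A / h**2

def als_rank1(A1, A2, f, g, sweeps=10):
    """One greedy step: approximate the solution of
    (A1 (x) I + I (x) A2) u = f (x) g by a rank-1 tensor v (x) w,
    minimising the energy J(u) = u'Au/2 - b'u by alternating least squares."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(len(g))          # random starting factor
    for _ in range(sweeps):
        # v-step: ((w,w) A1 + (w, A2 w) I) v = (w, g) f
        v = np.linalg.solve((w @ w) * A1 + (w @ A2 @ w) * np.eye(len(f)),
                            (w @ g) * f)
        # w-step: ((v,v) A2 + (v, A1 v) I) w = (v, f) g
        w = np.linalg.solve((v @ v) * A2 + (v @ A1 @ v) * np.eye(len(g)),
                            (v @ f) * g)
    return v, w
```

Note that each ALS half-step only requires solving a linear system of the size of one factor, which is the sense in which the cost is governed by lower-dimensional problems. If f and g are eigenvectors of A₁ and A₂, the exact solution is itself rank-1 and the sweep recovers it; in general, the greedy method subtracts the rank-1 term and repeats on the residual.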

Highlights

  • In this paper, we consider elliptic partial differential equations on domains which are the Cartesian product of a “long” interval I with a (d − 1)-dimensional domain ω, the cross section; a typical application is the modelling of a flow in long cylinders

  • In this paper, we consider the Poisson equation on a “long” domain which is the Cartesian product of a one-dimensional long interval with a (d − 1)-dimensional domain

  • Method 3 is based on the derivation of a tensor approximation via exponential sums applied to discretized differential operators and their inverses



Introduction

While the greedy method has the advantage that it can be combined with method 1 in a natural fashion, the numerical experiments which we have performed indicate that the convergence speed can slow down as the number of outer iterations increases. This is well known, and there are various strategies to accelerate the convergence, such as a Galerkin projection or one half-step of ALS on the low-rank factors. We emphasize that the explicit computation of the inverse of the discretisation matrix can be avoided by using the hierarchical format for its representation (see [15]). An advantage of this method is that a full theory is available which applies to our application and allows us to choose the tensor rank via an a priori error estimate. It can be shown that the tensor approximation converges exponentially with respect to the tensor rank (see [14]).

