Abstract

We study the efficient numerical solution of linear inverse problems with operator-valued data, which arise, e.g., in seismic exploration, inverse scattering, or tomographic imaging. The high dimensionality of the data space implies extremely high computational cost already for the evaluation of the forward operator, which makes a numerical solution of the inverse problem, e.g., by iterative regularization methods, practically infeasible. To overcome this obstacle, we take advantage of the underlying tensor product structure of the problem and propose a strategy for constructing low-dimensional certified reduced order models of quasi-optimal rank for the forward operator, which can be computed much more efficiently than the truncated singular value decomposition. A complete analysis of the proposed model reduction approach is given in a functional analytic setting, and we discuss the efficient numerical construction of the reduced order models as well as their application to the numerical solution of the inverse problem. In summary, the setup of a low-rank approximation can be achieved in an offline stage at essentially the same cost as a single evaluation of the forward operator, while the actual solution of the inverse problem in the online phase can be carried out with extremely high efficiency. The theoretical results are illustrated by application to a typical model problem in fluorescence optical tomography.
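
To make the offline/online split concrete, the following is a minimal, hedged sketch (not the authors' code): it assumes the offline stage has already produced a rank-N reduced-order model in factored form T_N = L R, and shows how Tikhonov-regularized reconstructions can then be computed online from the small factors only. All names (L, R, reconstruct) and problem sizes are illustrative assumptions.

```python
# Hedged sketch of the online phase, assuming an offline-computed rank-N
# factorization T_N = L @ R of the (discretized) forward operator.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_param, N = 10_000, 500, 20        # illustrative problem sizes

# Stand-in for the offline result: factors of the reduced-order model T_N = L @ R.
L = rng.standard_normal((n_data, N))
R = rng.standard_normal((N, n_param))

def reconstruct(m_delta, alpha):
    """Tikhonov solution of min_c ||L @ R @ c - m_delta||^2 + alpha * ||c||^2.

    Only one pass over the (large) data vector is needed; all remaining
    operations involve small matrices of size N or n_param.
    """
    y = L.T @ m_delta                            # project data onto N dimensions
    G = L.T @ L                                  # N x N Gram matrix (reusable)
    A = R.T @ (G @ R) + alpha * np.eye(n_param)  # small normal-equation matrix
    return np.linalg.solve(A, R.T @ y)

m_delta = rng.standard_normal(n_data)            # vectorized noisy measurement data
c_alpha = reconstruct(m_delta, alpha=1e-2)       # cheap to repeat for several alpha
```

Because the factors are small, the regularization parameter can be varied, or iterations restarted, at negligible additional cost, which is the point of the offline/online separation described above.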

Highlights

  • The forward operator is assumed to admit a factorization T(c) = V D(c) U, with V, D(c), and U denoting appropriate linear operators.

  • We consider the numerical solution of linear inverse problems with operator-valued data, modeled by abstract operator equations T(c) = M^δ (1.1). Here c ∈ X is the quantity to be determined, and M^δ : Y → Z, representing the possibly perturbed measurements, is assumed to be a linear operator of Hilbert–Schmidt class between the Hilbert spaces Y and Z.

  • It turns out that the proposed two-step construction of the approximation T_N, which is based on the underlying tensor-product structure of the problem, has significant advantages over the truncated singular value decomposition in the setup phase: T_N can be computed at the computational cost of essentially a single evaluation of the forward operator T(c) (a hedged sketch of this two-step construction is given below).
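
The following sketch illustrates one plausible reading of this two-step construction; it is not taken from the paper. It assumes, purely for illustration, a forward map of the form T(c)[i,j] = Σ_k V[i,k] c[k] U[k,j], first compresses the factors V and U by small truncated SVDs, and then recompresses the resulting small core, so that no SVD of the large matricized operator is ever required. All matrix names and sizes are assumptions.

```python
# Hedged illustration of a two-step low-rank construction exploiting the
# tensor-product structure T[(i,j), k] = V[i,k] * U[k,j]; not the paper's code.
import numpy as np

rng = np.random.default_rng(1)

def smooth_factor(m, n, decay=0.5):
    """Random matrix with exponentially decaying singular values,
    standing in for a discretized smoothing (solution) operator."""
    P, _ = np.linalg.qr(rng.standard_normal((m, min(m, n))))
    Q, _ = np.linalg.qr(rng.standard_normal((n, min(m, n))))
    return (P * decay ** np.arange(min(m, n))) @ Q.T

n_det, n_src, n_k = 40, 30, 200             # detectors, sources, parameter dofs
V = smooth_factor(n_det, n_k)               # detector-side factor
U = smooth_factor(n_k, n_src)               # source-side factor

# Matricized forward operator (formed here only to check the error).
T = np.einsum('ik,kj->ijk', V, U).reshape(n_det * n_src, n_k)

# Step 1: low-rank approximations of the small factors V and U.
rV = rU = 10
Pv, sv, Qvt = np.linalg.svd(V,   full_matrices=False)
Pu, su, Qut = np.linalg.svd(U.T, full_matrices=False)
A, B = Pv[:, :rV], sv[:rV, None] * Qvt[:rV, :]        # V   ~ A @ B
C, E = Pu[:, :rU], su[:rU, None] * Qut[:rU, :]        # U.T ~ C @ E

# Step 2: recompress the small core W (size rV*rU x n_k, not n_det*n_src x n_k).
W = (B[:, None, :] * E[None, :, :]).reshape(rV * rU, n_k)
Pw, sw, Qwt = np.linalg.svd(W, full_matrices=False)
N = 15
T_N = np.kron(A, C) @ (Pw[:, :N] * sw[:N]) @ Qwt[:N, :]

print("relative error of the rank-%d surrogate: %.2e"
      % (N, np.linalg.norm(T - T_N) / np.linalg.norm(T)))
```

Since A and C have orthonormal columns, the SVD of the small core W yields a quasi-optimal rank-N approximation within the compressed subspaces, while the dominant cost is spent on the two small factor SVDs rather than on a singular value decomposition of the full operator.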


Summary

Introduction

The forward operator is assumed to admit a factorization T(c) = V D(c) U, (1.2) with V, D(c), and U denoting appropriate linear operators. Problems of this kind arise in a variety of applications, e.g., in fluorescence tomography [1,30], inverse scattering [6,13], or source identification [17], and as linearizations of related nonlinear inverse problems; see, e.g., [8,34] or [23] and the references given there. In such applications, U typically models the propagation of the excitation fields generated by the sources, D(c) describes the interaction with the medium to be probed, and V models the emitted fields which can be recorded by the detectors. We briefly outline our basic approach to the numerical solution of (1.1)–(1.2) and discuss related work in the literature.
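
As a concrete, purely illustrative reading of this factorization, the sketch below assumes that D(c) acts as pointwise multiplication by the parameter c, as in a linearized fluorescence-tomography setting; the operator-valued datum is then the detector-source matrix M(c) = V D(c) U, and the map c ↦ M(c) is linear. The matrices U, V and the function forward are assumptions made for this example only.

```python
# Hedged illustration of the composite forward map c -> V D(c) U,
# assuming D(c) = diag(c); not taken from the paper.
import numpy as np

rng = np.random.default_rng(2)
n_det, n_src, n_k = 8, 6, 50
U = rng.standard_normal((n_k, n_src))   # excitation fields, one column per source
V = rng.standard_normal((n_det, n_k))   # emission/detection operator
c = rng.random(n_k)                     # parameter, e.g., fluorophore concentration

def forward(c):
    """Operator-valued datum M(c) = V D(c) U with D(c) = diag(c)."""
    return V @ (c[:, None] * U)         # scale rows of U by c; avoids forming diag(c)

M = forward(c)                          # n_det x n_src measurement matrix

# Linearity in c: each entry is M[i, j] = sum_k V[i, k] * c[k] * U[k, j].
c2 = rng.random(n_k)
assert np.allclose(forward(c + c2), forward(c) + forward(c2))
```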

Regularized inversion
Model reduction and computational complexity
Low-rank approximations
Contributions and outline of the paper
Sparse tensor product compression
Recompression
Summary of basic properties
Outline
Notation
Preliminaries and basic assumptions
Sparse tensor product approximation
Quasi-optimal low-rank approximation
Summary
Model equations
Forward operator
Algorithmic realization and complexity estimates
Problem setup
Truth approximation
Orthonormalization
Low-rank approximations for U and V
Hyperbolic cross approximation
Final recompression
Online phase
Computational results
Problem initialization
Model reduction – offline phase
Forward evaluation
Truncated singular value decomposition
Setup of reduced order model
Solution of inverse problem – online phase