Abstract

We study linear inverse problems under the premise that the forward operator is not at hand but given only indirectly through input-output training pairs. We demonstrate that regularization by projection and variational regularization can be formulated using the training data alone, without making use of the forward operator. In view of Seidman (1980 J. Optim. Theory Appl. 30 535), who showed that regularization by projection is not convergent in general, we study convergence and stability of the regularized solutions and give some insight into the generality of Seidman’s nonconvergence example. Moreover, we show, analytically and numerically, that regularization by projection is indeed capable of learning linear operators, such as the Radon transform.

Highlights

  • Linear inverse problems are concerned with the reconstruction of a quantity u ∈ U from indirect measurements y ∈ Y, which are related by the linear forward operator A : U → Y.

  • We demonstrate that regularisation by projection and variational regularisation can be formulated using the training data only, without making use of the forward operator (a schematic formulation is given after this list).

  • We demonstrate that the amount of training data plays the role of a regularisation parameter; for noisy training data, the size of the training set should be chosen in accordance with the noise level.

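One way to make the second point above concrete, stated schematically here (the paper's precise assumptions on the training pairs are omitted): given training pairs (u_1, y_1), …, (u_n, y_n) with y_j = A u_j, a reconstruction from a measurement y is sought in the form u_n = c_1 u_1 + … + c_n u_n, where the coefficients are chosen to minimise ‖c_1 y_1 + … + c_n y_n − y‖. In other words, A u_n is the projection of y onto span{y_1, …, y_n}, and it is computed from the training outputs alone, without ever applying A.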

Summary

Introduction

Linear inverse problems are concerned with the reconstruction of a quantity u ∈ U from indirect measurements y ∈ Y, which are related by the linear forward operator A : U → Y. Our main goal in this article is to carry over some classical results in regularisation theory to the model-free setting, with an overarching theme of using projections onto subspaces defined by training data in the framework of regularisation by projection. This perspective requires new, data-driven regularity conditions, for which we show a relation to source conditions in special cases, whilst in general such a relationship remains an open question. This is in accordance with the results in [24, Thm. 4.2] on training neural networks from noisy data, where the number of neurons in the network plays a role similar to the number of training inputs in our setting.
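
To make the projection-onto-training-subspaces idea concrete, here is a minimal numerical sketch under illustrative assumptions: the toy operator, the dimensions, the noise level and the helper reconstruct() are ours and not taken from the paper's experiments, and the forward operator appears only to synthesise the training pairs, never in the reconstruction itself.

    # Minimal sketch of regularisation by projection driven purely by training
    # pairs (u_j, y_j = A u_j).  All quantities below are illustrative; A is
    # used only to synthesise the pairs, never at reconstruction time.
    import numpy as np

    rng = np.random.default_rng(0)
    m, d = 200, 100                                    # measurement / image dimensions

    # Stand-in for an ill-conditioned linear forward operator A : U -> Y.
    q1, _ = np.linalg.qr(rng.standard_normal((m, m)))
    q2, _ = np.linalg.qr(rng.standard_normal((d, d)))
    s = np.arange(1, d + 1) ** -2.0                    # decaying singular values
    A = (q1[:, :d] * s) @ q2.T

    # Training pairs: columns of U_train and Y_train = A @ U_train.
    n_train = 60
    U_train = rng.standard_normal((d, n_train))
    Y_train = A @ U_train

    # Unknown ground truth and a noisy measurement of it.
    u_true = rng.standard_normal(d)
    y_delta = A @ u_true + 1e-3 * rng.standard_normal(m)

    def reconstruct(n):
        """Least-squares projection of y_delta onto span{y_1,...,y_n}, mapped back to U."""
        Yn, Un = Y_train[:, :n], U_train[:, :n]
        c, *_ = np.linalg.lstsq(Yn, y_delta, rcond=None)  # coefficients of the projection
        return Un @ c                                      # u_n = sum_j c_j u_j

    # n plays the role of a regularisation parameter; for noisy data it should
    # be matched to the noise level (cf. the highlights above).
    for n in (10, 30, 60):
        u_n = reconstruct(n)
        print(n, np.linalg.norm(u_n - u_true) / np.linalg.norm(u_true))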

Setting and Main Assumptions
Regularisation by Projection
A reconstruction formula
Weak convergence
Gram-Schmidt orthogonalisation in U
Seidman’s nonconvergence example
Strong convergence
Noisy data
Dual Least Squares
Variational Regularisation
Convergence analysis
Numerical Experiments
Dataset
Gram-Schmidt orthogonalisation vs Householder reflections
Forward operator
Absence of inverse crime
Regularisation by projection
Dual least squares
Variational regularisation
Practical recommendations
Findings
Conclusions
