Abstract

Many inverse problems are ill-posed in the sense that their solutions are unstable with respect to data perturbations, and hence regularization methods have to be used to solve them stably. Two possible drawbacks of standard regularization methods are (a) saturation, i.e., only suboptimal approximations can be found for smooth solutions—this is the case, e.g., for Tikhonov regularization; and (b) numerical effort, e.g., the large number of iterations required by methods like Landweber iteration. A framework that allows us to overcome both drawbacks, at least for certain classes of inverse problems, is regularization in Hilbert scales, where a solution is sought in a scale of spaces (a Hilbert scale), but convergence is achieved in the original space. Regularization methods in Hilbert scales can be viewed as modified (preconditioned) versions of standard methods. In order to make the advantages of the Hilbert scale approach applicable to a new class of problems, we propose to use a scale of spaces over the image space instead. This results in a new family of methods, which we call $\mathcal{Y}$-scale regularization, and whose (optimal) convergence properties are analyzed. One of the key steps in the analysis is the formulation of an adequate a posteriori stopping rule and the proof of optimal convergence rates. The theoretical results are illustrated in several numerical examples.
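The two standard methods named above can be sketched on a toy discrete ill-posed problem. The following is an illustrative sketch only, assuming a randomly generated forward operator with rapidly decaying singular values and an assumed noise level; it shows plain Tikhonov regularization and Landweber iteration, not the paper's preconditioned $\mathcal{Y}$-scale variants.

```python
import numpy as np

# Toy ill-posed problem: forward operator A with rapidly decaying
# singular values (assumed for illustration, not from the paper).
rng = np.random.default_rng(0)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                 # decaying singular values
A = U @ np.diag(s) @ V.T

x_true = V[:, 0] + 0.5 * V[:, 1]          # smooth solution (leading modes)
y = A @ x_true + 1e-4 * rng.standard_normal(n)  # noisy data

# Tikhonov regularization: x_a = argmin ||A x - y||^2 + a ||x||^2,
# computed via the normal equations (A^T A + a I) x = A^T y.
a = 1e-6
x_tik = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y)

# Landweber iteration: x_{k+1} = x_k + w A^T (y - A x_k),
# with step size 0 < w < 2 / ||A||^2; note the many iterations needed.
w = 1.0 / np.linalg.norm(A, 2) ** 2
x_lw = np.zeros(n)
for _ in range(5000):
    x_lw += w * A.T @ (y - A @ x_lw)

err_tik = np.linalg.norm(x_tik - x_true)
err_lw = np.linalg.norm(x_lw - x_true)
```

Both methods filter the small singular values and so recover the smooth components of `x_true` while damping noise amplification; the 5000-iteration loop hints at the numerical effort that Hilbert-scale (and $\mathcal{Y}$-scale) preconditioning is designed to reduce.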
