Abstract

We study Newton type methods for inverse problems described by nonlinear operator equations $$F(u)=g$$ in Banach spaces where the Newton equations $$F^{\prime }(u_n;u_{n+1}-u_n) = g-F(u_n)$$ are regularized variationally using a general data misfit functional and a convex regularization term. This generalizes the well-known iteratively regularized Gauss–Newton method (IRGNM). We prove convergence and convergence rates as the noise level tends to $$0$$ both for an a priori stopping rule and for a Lepskiĭ-type a posteriori stopping rule. Our analysis includes previous order optimal convergence rate results for the IRGNM as special cases. The main focus of this paper is on inverse problems with Poisson data where the natural data misfit functional is given by the Kullback–Leibler divergence. Two examples of such problems are discussed in detail: an inverse obstacle scattering problem with amplitude data of the far-field pattern and a phase retrieval problem. The performance of the proposed method for these problems is illustrated in numerical examples.

Highlights

  • This study has been motivated by applications in photonic imaging, e.g. positron emission tomography [45] and deconvolution problems in astronomy and microscopy [8].
  • The inverse problem of recovering the information on the object of interest from such photon counts can be formulated as an operator equation

  • The operator $$F$$ satisfies a generalized tangential cone condition of the form $$\mathcal{T}\bigl(g^{\dagger}; F(u) + F^{\prime}(u; v-u)\bigr) \le C_{tc}\,\mathcal{T}\bigl(g^{\dagger}; F(v)\bigr) + \eta\,\mathcal{T}\bigl(g^{\dagger}; F(u)\bigr)$$ for all $$u, v \in \mathfrak{B}$$ (9b). This condition ensures that the nonlinearity of $$F$$ fits together with the data misfit functionals $$\mathcal{S}$$ or $$\mathcal{T}$$.

Summary

Introduction

This study has been motivated by applications in photonic imaging, e.g. positron emission tomography [45] and deconvolution problems in astronomy and microscopy [8]. We add a proper convex penalty functional $$\mathcal{R}: \mathcal{X} \to (-\infty, \infty]$$. This leads to the iteratively regularized Newton-type method $$u_{n+1} \in \operatorname*{argmin}_{u} \Bigl[\mathcal{S}\bigl(g^{\mathrm{obs}}; F(u_n) + F^{\prime}(u_n; u - u_n)\bigr) + \alpha_n \mathcal{R}(u)\Bigr],$$ where $$\alpha_n > 0$$ is a regularization parameter. For nonlinear operators this is in general a non-convex optimization problem even if $$\mathcal{S}(g^{\mathrm{obs}}; \cdot)$$ and $$\mathcal{R}$$ are convex. It is necessary to use more general formulations of the noise level and of the tangential cone condition, which controls the degree of nonlinearity of the operator $$F$$; both coincide with the usual assumptions if $$\mathcal{S}$$ is given by a norm.
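The iteration above can be sketched for the classical IRGNM special case, where both the data misfit $$\mathcal{S}$$ and the penalty $$\mathcal{R}$$ are squared Hilbert-space norms, so that each Newton step reduces to a linear least-squares problem with a closed-form solution. The sketch below is a minimal illustration under these assumptions, not the authors' implementation; the test operator `F_demo`, its Jacobian `jac_demo`, and the geometric a priori choice of $$\alpha_n$$ are hypothetical choices made for the example.

```python
import numpy as np

def irgnm(F, jac, g_obs, u0, alpha0=1.0, q=2/3, n_iter=30):
    """Classical IRGNM sketch: each step minimizes
        ||F'(u_n)(u - u_n) + F(u_n) - g_obs||^2 + alpha_n * ||u - u0||^2,
    which in this quadratic case is solved via its normal equations."""
    u = u0.copy()
    alpha = alpha0
    for _ in range(n_iter):
        J = jac(u)
        # Normal equations: (J^T J + alpha I) u_new = J^T (g_obs - F(u) + J u) + alpha u0
        lhs = J.T @ J + alpha * np.eye(len(u))
        rhs = J.T @ (g_obs - F(u) + J @ u) + alpha * u0
        u = np.linalg.solve(lhs, rhs)
        alpha *= q  # a priori choice alpha_n = alpha_0 * q^n
    return u

# Hypothetical smooth test operator F(u) = (u_1 + u_2^2/2, u_2 + u_1^2/2)
def F_demo(u):
    return np.array([u[0] + 0.5 * u[1]**2, u[1] + 0.5 * u[0]**2])

def jac_demo(u):
    return np.array([[1.0, u[1]], [u[0], 1.0]])

u_true = np.array([1.0, 2.0])
g_obs = F_demo(u_true)  # exact data, for illustration only
u_rec = irgnm(F_demo, jac_demo, g_obs, u0=np.zeros(2))
```

As $$\alpha_n \to 0$$ the regularization bias vanishes and `u_rec` approaches a solution of $$F(u) = g^{\mathrm{obs}}$$; with noisy data one would instead stop the iteration early, e.g. by the Lepskiĭ-type a posteriori rule analyzed in the paper.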

Assumptions and convergence theorem with a priori stopping rule
A Lepskiĭ-type stopping rule and additive source conditions
Relation to previous results
Convergence analysis for Poisson data
Applications and computed examples
