Abstract

In this paper, we consider the iteratively regularized Gauss–Newton method with frozen derivative and establish its convergence rates in the setting of Banach spaces. The convergence rates of the iteratively regularized Gauss–Newton method with frozen derivative have been well studied via generalized source conditions. Here, we instead utilize the recently developed concept of conditional stability of the inverse mapping to derive convergence rates. To demonstrate the practicality of our results, we show that they are applicable to an ill-posed inverse problem. Finally, we compare the convergence rates derived in this paper with the existing convergence rates in the literature.
