Abstract

In this paper, we consider Nesterov's accelerated gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for solving well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems under the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
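
For orientation, the iteration behind the method can be sketched as follows. This is the standard way Nesterov's acceleration is applied to the residual functional Φ^δ(x) = ½ ||F(x) − y^δ||²; the step size ω and the momentum weight (k − 1)/(k + α − 1) are the usual conventions and are assumptions here, not necessarily the exact formulation used in the paper:

    z_k = x_k + ((k − 1)/(k + α − 1)) (x_k − x_{k−1}),
    x_{k+1} = z_k − ω F′(z_k)* (F(z_k) − y^δ),

where y^δ denotes the noisy data and F′(z_k)* the adjoint of the Fréchet derivative of F at z_k.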

Highlights

  • In this paper, we consider nonlinear inverse problems of the form F(x) = y (1.1), where F : D(F) ⊂ X → Y is a continuously Fréchet-differentiable, nonlinear operator between real Hilbert spaces X and Y.

  • Since we are interested in ill-posed problems, we need to use regularization methods in order to obtain stable approximations of solutions of (1.1).

  • Under very mild assumptions on F, it can be shown that the minimizers of T_α^δ, usually denoted by x_α^δ, converge subsequentially to a minimum-norm solution x† as δ → 0, given that α and the noise level δ are coupled in an appropriate way [9]; the functional is written out below.
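
For reference, the Tikhonov functional T_α^δ referred to above is assumed here to be the standard one with a quadratic penalty around an initial guess x₀:

    T_α^δ(x) = ||F(x) − y^δ||² + α ||x − x₀||²,

where y^δ denotes the noisy data satisfying ||y − y^δ|| ≤ δ.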

Summary

Introduction

We consider nonlinear inverse problems of the form F(x) = y (1.1), where F : D(F) ⊂ X → Y is a continuously Fréchet-differentiable, nonlinear operator between real Hilbert spaces X and Y. In case the residual functional Φ^δ(x) is locally convex, one can use methods from convex optimization to minimize Φ^δ(x), instead of the plain gradient method underlying Landweber iteration. One of those methods, which works remarkably well for nonlinear, convex and well-posed optimization problems of the form min{Φ(x) | x ∈ X}, is Nesterov's accelerated gradient method: it achieves a convergence rate of O(k^{-2}) for the functional values if α > 3, which is again much faster than ordinary first-order methods for minimizing (1.14). This accelerating property was exploited in the highly successful FISTA algorithm [4], designed for the fast solution of linear ill-posed problems with sparsity constraints. In case the operator F is linear, Neubauer showed in [28] that, combined with a suitable stopping rule and under a source condition, the iteration (1.18) gives rise to a convergent regularization method and that convergence rates can be obtained.
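
To make this concrete, the following minimal Python sketch applies the two-step accelerated iteration shown after the Abstract to a toy problem and stops via the discrepancy principle. The diagonal operator F(x)_i = x_i + x_i³, the step size ω, and the parameters α and τ are illustrative assumptions, not the choices made in the paper.

    import numpy as np

    # Toy nonlinear diagonal operator F(x)_i = x_i + x_i^3 (illustrative only).
    def F(x):
        return x + x**3

    def gradient(z, y_delta):
        # Gradient of 0.5*||F(x) - y_delta||^2: the Jacobian of F is diagonal
        # with entries 1 + 3*x_i^2 and is therefore self-adjoint.
        return (1.0 + 3.0 * z**2) * (F(z) - y_delta)

    def nesterov(y_delta, x0, delta, omega=0.05, alpha=3.0, tau=2.0, max_iter=20000):
        # Accelerated gradient iteration with momentum weight (k-1)/(k+alpha-1),
        # stopped by the discrepancy principle ||F(x_k) - y_delta|| <= tau*delta.
        x_prev, x = x0.copy(), x0.copy()
        for k in range(1, max_iter + 1):
            if np.linalg.norm(F(x) - y_delta) <= tau * delta:
                return x, k
            z = x + (k - 1.0) / (k + alpha - 1.0) * (x - x_prev)
            x_prev, x = x, z - omega * gradient(z, y_delta)
        return x, max_iter

    # Synthetic test: exact solution, noisy data at noise level delta.
    rng = np.random.default_rng(0)
    x_true = np.linspace(0.0, 1.0, 50)
    delta = 1e-3
    noise = rng.standard_normal(x_true.size)
    y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)
    x_rec, iters = nesterov(y_delta, np.zeros_like(x_true), delta)
    print(iters, np.linalg.norm(x_rec - x_true))

In a genuinely ill-posed setting one would replace F by, for example, a discretized auto-convolution operator; the structure of the loop is unchanged.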

Convergence Analysis I
Convergence Analysis II
Strong Convexity and Nonlinearity Conditions
Example Problems
Example 1 - Nonlinear Diagonal Operator
Example 2 - Auto-Convolution Operator
Further Examples
Support and Acknowledgements