Abstract
In this paper, we consider Nesterov’s accelerated gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for solving well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
Highlights
In this paper, we consider nonlinear inverse problems of the form F(x) = y (1.1), where F : D(F) ⊂ X → Y is a continuously Fréchet-differentiable, nonlinear operator between real Hilbert spaces X and Y.
Since we are interested in ill-posed problems, we need to use regularization methods in order to obtain stable approximations of solutions of (1.1).
Under very mild assumptions on F, it can be shown that the minimizers of T_α^δ, usually denoted by x_α^δ, converge subsequentially to a minimum-norm solution x† as δ → 0, given that α and the noise level δ are coupled in an appropriate way [9].
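For orientation, T_α^δ here denotes the Tikhonov functional; in its standard form (an assumption, since this summary does not reproduce the paper’s exact definition) it reads

$$ T_\alpha^\delta(x) = \lVert F(x) - y^\delta \rVert^2 + \alpha\,\lVert x - x_0 \rVert^2 , $$

where y^δ is the noisy data satisfying ‖y − y^δ‖ ≤ δ, α > 0 is the regularization parameter, and x₀ is an initial guess.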
Summary
Here F : D(F) ⊂ X → Y is again a continuously Fréchet-differentiable, nonlinear operator between real Hilbert spaces X and Y. In case the residual functional Φ^δ(x) is locally convex, one could think of using methods from convex optimization to minimize Φ^δ(x), instead of using the gradient method as in Landweber iteration. One of those methods, which works remarkably well for nonlinear, convex and well-posed optimization problems of the form min{Φ(x) | x ∈ X}, is Nesterov’s accelerated gradient method. Its convergence rate for the functional values is O(k⁻²), and even o(k⁻²) if α > 3, which is again much faster than ordinary first-order methods for minimizing (1.14). This accelerating property was exploited in the highly successful FISTA algorithm [4], designed for the fast solution of linear ill-posed problems with sparsity constraints. In case the operator F is linear, Neubauer showed in [28] that, combined with a suitable stopping rule and under a source condition, (1.18) gives rise to a convergent regularization method, and that convergence rates can be obtained.
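For concreteness, below is a minimal sketch of the accelerated iteration, assuming the standard two-step form behind (1.18): an extrapolation step z_k = x_k + ((k−1)/(k+α−1))(x_k − x_{k−1}) followed by a Landweber-type gradient step x_{k+1} = z_k + ω F′(z_k)*(y^δ − F(z_k)), stopped by the discrepancy principle. The toy diagonal operator, the step size ω, and the constants are illustrative assumptions, not the paper’s actual test setup.

```python
import numpy as np

def F(x):
    """Toy nonlinear diagonal operator, acting componentwise as x -> x^2
    (a stand-in for the kind of diagonal-operator example the paper uses)."""
    return x**2

def F_prime_adjoint(x, r):
    """Adjoint of the Frechet derivative F'(x) applied to a residual r.
    For this diagonal operator, F'(x) is multiplication by 2x (self-adjoint)."""
    return 2.0 * x * r

def nesterov_landweber(y_delta, x0, alpha=3.0, omega=0.1,
                       delta=1e-3, tau=2.5, max_iter=5000):
    """Two-step accelerated iteration (assumed form of (1.18)):
        z_k     = x_k + (k-1)/(k+alpha-1) * (x_k - x_{k-1})
        x_{k+1} = z_k + omega * F'(z_k)^* (y_delta - F(z_k))
    stopped once ||F(x_{k+1}) - y_delta|| <= tau * delta (discrepancy principle).
    """
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, max_iter + 1):
        z = x + (k - 1) / (k + alpha - 1) * (x - x_prev)
        x_prev, x = x, z + omega * F_prime_adjoint(z, y_delta - F(z))
        if np.linalg.norm(F(x) - y_delta) <= tau * delta:
            break
    return x, k

# Usage: recover x_true from noisy data y_delta with noise of norm delta.
rng = np.random.default_rng(0)
x_true = np.linspace(0.5, 1.5, 100)
delta = 1e-3
noise = rng.standard_normal(x_true.size)
y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)
x_rec, iters = nesterov_landweber(y_delta, x0=np.ones(x_true.size), delta=delta)
print(f"stopped after {iters} iterations, "
      f"error ||x_rec - x_true|| = {np.linalg.norm(x_rec - x_true):.3e}")
```

Note the choice α = 3: for α ≥ 3 the extrapolation factor (k−1)/(k+α−1) reproduces the classical Nesterov momentum, while α > 3 yields the o(k⁻²) rate mentioned above.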