Abstract

In this paper, we consider a subspace optimization method for solving nonlinear inverse problems in Banach spaces, based on sequential Bregman projections with a uniformly convex penalty term. The penalty term may be non-smooth, including L1 and total variation-like functionals, so that special features of solutions such as sparsity and discontinuities can be reconstructed. Instead of using only the current gradient, as the Landweber iteration does, the method employs multiple search directions in each iteration to accelerate convergence. The corresponding step lengths are computed by projection onto a subspace containing the solution set of the unperturbed problem. Under certain assumptions, we present a detailed convergence analysis for exact data. For noisy data, we use the discrepancy principle as the stopping rule and establish the regularization property of the method. Finally, numerical simulations for parameter identification problems illustrate the method's ability to capture features of the exact solutions and its acceleration effect.
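
The abstract gives no formulas, so the following is only a minimal illustrative sketch, not the paper's algorithm. It specializes to a Hilbert-space setting with the penalty ||x||^2/2, where Bregman projections reduce to ordinary linear updates, uses a toy forward operator, and combines the current gradient with the previous step as two search directions, with step lengths obtained by projecting the residual onto the span of the linearized directions. The names (F, F_jac, subspace_landweber), the toy problem, and the parameter choices are all hypothetical.

```python
import numpy as np

def F(x):
    """Toy nonlinear forward operator F: R^2 -> R^2 (illustrative only)."""
    return np.array([x[0] + 0.1 * x[1] ** 2,
                     0.1 * x[0] ** 2 + x[1]])

def F_jac(x):
    """Jacobian of F, standing in for the Frechet derivative F'(x)."""
    return np.array([[1.0, 0.2 * x[1]],
                     [0.2 * x[0], 1.0]])

def subspace_landweber(y_delta, delta, x0, tau=1.1, max_iter=500):
    """Two-direction subspace iteration, stopped by the discrepancy principle.

    Hilbert-space special case with penalty ||x||^2 / 2; a Banach-space
    version would update a dual variable and map back via the penalty's
    convex conjugate instead.
    """
    x = x0.astype(float)
    d_prev = None
    for n in range(max_iter):
        r = F(x) - y_delta
        # Discrepancy principle: stop once ||F(x) - y^delta|| <= tau * delta.
        if np.linalg.norm(r) <= tau * delta:
            break
        J = F_jac(x)
        g = J.T @ r  # current (Landweber-type) gradient direction
        dirs = [g] if d_prev is None else [g, d_prev]
        # Step lengths: minimize ||r - J @ sum_i mu_i d_i||^2 over mu, i.e.
        # orthogonally project the residual onto span{J d_i} (least squares).
        B = np.column_stack([J @ d for d in dirs])
        mu, *_ = np.linalg.lstsq(B, r, rcond=None)
        step = sum(m * d for m, d in zip(mu, dirs))
        x = x - step
        d_prev = step  # reuse the last step as a second search direction
    return x, n

# Usage: simulate noisy data for a known solution, then reconstruct.
rng = np.random.default_rng(0)
x_true = np.array([1.0, 2.0])
noise = 1e-3 * rng.standard_normal(2)
y_delta = F(x_true) + noise
x_rec, iters = subspace_landweber(y_delta, delta=np.linalg.norm(noise),
                                  x0=np.zeros(2))
print(iters, x_rec)
```

Because the second direction reuses the previous step, each iteration solves a small Gram system rather than taking a single fixed-step gradient move, which is the source of the acceleration over plain Landweber iteration described above.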
