Abstract

Gradient descent methods are the most frequently used algorithms for computing regularized solutions of inverse problems. They are applied either directly to the discrepancy term, which measures the difference between the operator evaluation and the data, or to a regularized version incorporating suitable penalty terms. In their basic form, gradient descent methods converge slowly. We aim to extend several optimization schemes that have recently been introduced to accelerate these approaches by addressing more general penalty terms. In particular, we work in a general setting of infinite-dimensional Hilbert spaces and examine accelerated algorithms for regularization methods using total variation or sparsity constraints. To illustrate the efficiency of these algorithms, we apply them to a parameter identification problem for an elliptic partial differential equation using total variation regularization.
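
As a concrete, finite-dimensional illustration of the kind of accelerated scheme described above, the sketch below applies a FISTA-type accelerated proximal gradient iteration to a discretized linear inverse problem with a sparsity (ℓ1) penalty. The matrix `A`, the penalty weight `alpha`, and the helper names are illustrative assumptions, not the paper's specific algorithm or setting, which is formulated in infinite-dimensional Hilbert spaces and also covers total variation penalties.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_sparse_recovery(A, y, alpha, n_iter=200):
    """Accelerated proximal gradient (FISTA-type) sketch for
    min_x 0.5 * ||A x - y||^2 + alpha * ||x||_1,
    where A stands in for a discretized forward operator.
    Illustrative only; not the paper's infinite-dimensional scheme."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()                               # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)               # gradient of the discrepancy term
        x_new = soft_threshold(z - grad / L, alpha / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov-style extrapolation
        x, t = x_new, t_new
    return x

# Tiny synthetic example: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_rec = accelerated_sparse_recovery(A, y, alpha=0.1)
```

Compared with plain gradient (Landweber-type) iterations on the same functional, the extrapolation step typically reduces the number of iterations needed for a given accuracy, which is the acceleration effect the abstract refers to.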
