Abstract

We present a discrepancy-based parameter choice and stopping rule for iterative algorithms performing approximate Tikhonov-functional minimization, which adapts the regularization parameter value during the optimization procedure. The suggested parameter choice and stopping rule can be applied to a wide class of penalty terms and to iterative algorithms that aim at Tikhonov regularization with a fixed parameter value. In particular, it leads to computable guaranteed estimates for the exact regularized discrepancy in terms of numerical approximations. Based on these estimates, convergence to a solution is shown. As an example, the developed theory and the algorithm are applied to the case of sparse regularization. We prove order-optimal convergence rates in the case of sparse regularization, i.e., penalties given by weighted ℓp norms, which turn out to be the same as those already obtained in the literature for the a priori parameter choice rule and for Morozov's principle applied to exact regularized solutions. Finally, numerical results for two different minimization techniques, the iterative soft thresholding algorithm (ISTA) and the monotone fast iterative soft thresholding algorithm (MFISTA), are presented, confirming, in particular, the results from the theory.
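To make the abstract's setting concrete, the following is a minimal sketch of ISTA for the ℓ1-penalized Tikhonov functional 0.5‖Ax − y‖² + α‖x‖₁, combined with a discrepancy-based rule that shrinks α while the residual exceeds a multiple of the noise level and stops once it falls below that threshold. This is an illustrative variant under stated assumptions, not the authors' exact adaptation rule; the function names (`soft_threshold`, `ista_discrepancy`) and the geometric update factor `q` are hypothetical choices for this sketch.

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise soft thresholding: the proximal map of tau * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_discrepancy(A, y, delta, alpha0=1.0, q=0.9, tau_factor=1.1,
                     max_iter=500):
    """ISTA iterations for 0.5*||A x - y||^2 + alpha*||x||_1.

    alpha is decreased geometrically (hypothetical rule for this sketch)
    while the discrepancy ||A x - y|| stays above tau_factor * delta;
    the iteration stops once the Morozov-type criterion is met.
    """
    m, n = A.shape
    x = np.zeros(n)
    alpha = alpha0
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of x -> A^T(Ax - y)
    for _ in range(max_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, alpha / L)
        if np.linalg.norm(A @ x - y) <= tau_factor * delta:
            break                        # discrepancy-based stopping rule
        alpha *= q                       # adapt the regularization parameter
    return x, alpha
```

For instance, with a random matrix `A = np.random.randn(50, 100)`, a sparse ground truth, and data `y` perturbed by noise of norm `delta`, the returned pair gives the reconstruction together with the parameter value at which the discrepancy criterion was first satisfied. MFISTA differs from this sketch only in adding an extrapolation step with a monotonicity safeguard on the functional values.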
