Abstract

Block coordinate descent (BCD) methods approach optimization problems by performing gradient steps along alternating subgroups of coordinates. This is in contrast to full gradient descent, where a gradient step updates all coordinates simultaneously. BCD has been demonstrated to accelerate the gradient method in many practical large-scale applications. Despite its success, no convergence analysis for inverse problems has been available so far. In this paper, we investigate the BCD method for solving linear inverse problems. As the main theoretical result, we show that for operators having a particular tensor product form, the BCD method combined with an appropriate stopping criterion yields a convergent regularization method. To illustrate the theory, we perform numerical experiments comparing the BCD and the full gradient descent method for a system of integral equations. We also present numerical tests for a nonlinear inverse problem not covered by our theory, namely one-step inversion in multispectral X-ray tomography.
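To make the contrast between BCD and full gradient descent concrete, the following is a minimal sketch of a cyclic BCD iteration for a discretized linear inverse problem Ax = y with noisy data, stopped by a discrepancy-type criterion. The matrix, noise level, block split, and stopping rule below are illustrative assumptions, not the operators or parameters used in the paper's experiments.

```python
import numpy as np

# Hypothetical problem: A, y, noise level and block split are illustrative only.
rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
delta = 1e-2                                   # assumed noise level
y = A @ x_true + delta * rng.standard_normal(m)

blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]  # two coordinate blocks
tau = 1.5                                              # discrepancy-principle factor
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # conservative step size

x = np.zeros(n)
for k in range(10_000):
    residual = A @ x - y
    # Early stopping via a discrepancy principle (one common choice of
    # stopping criterion; the paper's precise rule may differ).
    if np.linalg.norm(residual) <= tau * delta * np.sqrt(m):
        break
    # BCD step: update only one block of coordinates per iteration,
    # cycling through the blocks, instead of a full gradient step.
    b = blocks[k % len(blocks)]
    x[b] -= step * (A[:, b].T @ residual)

print("iterations:", k,
      "relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Replacing the block update with `x -= step * (A.T @ residual)` recovers plain Landweber/gradient descent, which is the baseline the paper compares against.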
