Abstract

We consider the proximal gradient algorithm (PGA) for solving penalized least-squares minimization problems arising in data science. This first-order algorithm is attractive owing to its flexibility and minimal memory requirements, which allow it to tackle large-scale minimization problems involving non-smooth penalties. However, for problems such as X-ray computed tomography, the cost of each iteration is dominated by the application of the forward linear operator and its adjoint. In practice, the adjoint operator is therefore often replaced by an alternative operator, with the aim of reducing the overall computational burden and potentially improving conditioning. In this paper, we analyze the effect of such an adjoint mismatch on the convergence of the proximal gradient algorithm in an infinite-dimensional setting, thus generalizing existing results on PGA. We derive conditions on the step-size and on the gradient of the smooth part of the objective function under which convergence of the algorithm to a fixed point is guaranteed. We also derive bounds on the error between this point and the solution of the original minimization problem. We illustrate our theoretical findings on two image reconstruction tasks in computed tomography.
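For context, the penalized least-squares problem and the matched PGA update can be sketched as follows. This formulation is reconstructed from iteration (12) quoted in the Highlights below, replacing the surrogate operators Kn by the true adjoint H∗; the quadratic weight κ is an assumption read off from the (1 − γκ) factor and may differ from the exact formulation in the paper.

```latex
% Penalized least-squares objective; the quadratic term (kappa/2)||x||^2 is inferred
% from the (1 - gamma*kappa) factor appearing in iterations (11)-(12)
\[
  \underset{x \in \mathcal{H}}{\operatorname{minimize}}\;\;
  \tfrac{1}{2}\|Hx - y\|^{2} + \tfrac{\kappa}{2}\|x\|^{2} + g(x)
\]
% Matched PGA update (iteration (11)), recovered from (12) by taking K_n = H^*
\[
  x_{n+1} = x_n + \theta_n \Bigl( \operatorname{prox}_{\gamma g}\bigl( (1-\gamma\kappa)\, x_n - \gamma H^{*}(H x_n - y) \bigr) - x_n \Bigr)
\]
```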

Highlights

  • Linear inverse problems arise when modeling phenomena from a broad range of real-life applications in image and signal processing

  • The adjoint H∗ is often replaced by an alternative operator, with the aim of increasing the convergence rate through better conditioning, or of making efficient use of hardware accelerators to reduce the total computation time. This strategy results in an adjoint mismatch that breaks the operator symmetry [29, 49]. It is frequently used in tomographic transmission imaging [29], as practiced in industrial non-destructive testing and diagnostic medical imaging [8, 33], and it has been advocated in SPECT (Single Photon Emission Computed Tomography) imaging [44, 50]

  • In the context of an adjoint mismatch, the operator H∗ is purposefully replaced by surrogate operators (Kn)n∈N, and iteration (11) becomes: for every n ∈ N, xn+1 = xn + θn ( proxγg((1 − γκ)xn − γKn(Hxn − y)) − xn ). (12) Hereafter, we list the assumptions used throughout this paper to analyze scheme (12); a numerical sketch of this iteration is given below
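Below is a minimal NumPy sketch of iteration (12), assuming dense matrices for the forward operator H and a single fixed surrogate back-projector K ≈ H∗, and assuming an ℓ1 penalty g whose proximity operator is soft-thresholding. The function names, the constant relaxation θ, and the choice of g are illustrative and not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (an illustrative choice for the penalty g)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mismatched_pga(H, K, y, gamma, kappa=0.0, lam=1.0, theta=1.0, n_iter=200):
    """Sketch of iteration (12): the adjoint H* is replaced by a surrogate K.

    H : (m, d) forward operator, K : (d, m) surrogate back-projector with K ~ H.T,
    gamma : step-size, kappa : quadratic penalty weight, lam : l1 weight,
    theta : relaxation parameter (kept constant here for simplicity).
    """
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # Mismatched gradient step: K plays the role of the adjoint H*
        forward_step = (1.0 - gamma * kappa) * x - gamma * (K @ (H @ x - y))
        # Relaxed proximal step
        x = x + theta * (soft_threshold(forward_step, gamma * lam) - x)
    return x
```

In a tomography setting, K would typically be an unmatched back-projector chosen for speed or hardware efficiency, while H is the forward projector.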


Summary

Introduction

Linear inverse problems arise when modeling phenomena from a broad range of real-life applications in image and signal processing. When the objective function is a least-squares term without any regularization, PGA reduces to a simple gradient algorithm. In this context, adjoint mismatch has been investigated in the early work of [50] and in [24, 27, 39]. We propose to extend the theoretical ideas of [24] on PGA in the presence of adjoint mismatch to solve a penalized least-squares problem in an arbitrary Hilbert space. For this kind of problem, the resulting algorithm can be seen as a generalization of PGA.
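To make the reduction mentioned above explicit, the unregularized special case can be written out as follows (assuming, for simplicity, no quadratic term and unit relaxation, i.e. κ = 0 and θn = 1):

```latex
% Special case g = 0: prox_{gamma g} is the identity, so the matched and
% mismatched schemes reduce to gradient-descent-type iterations
\[
  x_{n+1} = x_n - \gamma H^{*}(H x_n - y) \qquad \text{(matched gradient algorithm)}
\]
\[
  x_{n+1} = x_n - \gamma K_n(H x_n - y) \qquad \text{(mismatched version studied in [50], [24, 27, 39])}
\]
```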

Notation and mathematical background
Proximal gradient algorithm for the penalized least-squares criterion
Mismatched algorithm
Properties of the modified gradient descent operator
Fixed points
Convergence result
Numerical experiments
Problem statement
Results
Conclusion