Abstract

We study the solution of minimax problems min_x max_y G(x) + ⟨K(x), y⟩ − F*(y) in finite-dimensional Hilbert spaces. The functionals G and F* we assume to be convex, but the operator K we allow to be nonlinear. We formulate a natural extension of the modified primal–dual hybrid gradient method, originally for linear K, due to Chambolle and Pock. We prove the local convergence of the method, provided various technical conditions are satisfied. These include in particular the Aubin property of the inverse of a monotone operator at the solution. Of particular interest to us is the case arising from Tikhonov type regularization of inverse problems with nonlinear forward operators. Mainly we are interested in total variation and second-order total generalized variation priors. For such problems, we show that our general local convergence result holds when the noise level of the data f is low, and the regularization parameter α is correspondingly small. We verify the numerical performance of the method by applying it to problems from magnetic resonance imaging (MRI) in chemical engineering and medicine. The specific applications are in diffusion tensor imaging and MR velocity imaging. These numerical studies show very promising performance.
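The summary above does not reproduce the method itself; as a rough illustration, the extension it describes can be read as a Chambolle–Pock-type primal–dual iteration in which the fixed linear operator is replaced by the linearisation of K at the current iterate. The Python sketch below is written under that assumption: the function name nl_pdhg, its arguments (prox_tauG, prox_sigmaFstar, gradK, the step sizes tau and sigma) and the small linear-K smoke test are illustrative placeholders, not code from the paper.

    import numpy as np

    def nl_pdhg(K, gradK, prox_tauG, prox_sigmaFstar, x0, y0, tau, sigma, iters=500):
        """Hypothetical sketch of a primal-dual hybrid gradient step with nonlinear K:
        the adjoint of the linearisation gradK(x) takes the place of the fixed
        linear operator in the standard Chambolle-Pock method."""
        x, y = x0.copy(), y0.copy()
        for _ in range(iters):
            # Primal step: proximal map of tau*G after a step along -[gradK(x)]^T y.
            x_new = prox_tauG(x - tau * gradK(x).T @ y)
            # Over-relaxed point, as in the standard (linear-K) method.
            x_bar = 2.0 * x_new - x
            # Dual step: proximal map of sigma*F*, with K evaluated at the over-relaxed point.
            y = prox_sigmaFstar(y + sigma * K(x_bar))
            x = x_new
        return x, y

    # Smoke test with a *linear* K, where the scheme reduces to the standard
    # Chambolle-Pock method: min_x alpha*||x||_1 + 0.5*||A x - b||^2, dualised as
    # G(x) = alpha*||x||_1, K(x) = A x, F*(y) = 0.5*||y||^2 + <b, y>.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    b = rng.standard_normal(20)
    alpha = 0.1
    L = np.linalg.norm(A, 2)          # operator norm of A
    tau = sigma = 0.9 / L             # so that tau * sigma * ||A||^2 < 1
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x, y = nl_pdhg(K=lambda x: A @ x,
                   gradK=lambda x: A,  # the Jacobian of a linear map is A itself
                   prox_tauG=lambda v: soft(v, tau * alpha),
                   prox_sigmaFstar=lambda v: (v - sigma * b) / (1.0 + sigma),
                   x0=np.zeros(50), y0=np.zeros(20),
                   tau=tau, sigma=sigma, iters=2000)

With a genuinely nonlinear K one would expect the admissible step sizes to depend on a local bound on ∇K near the solution, consistent with the local (rather than global) convergence result stated in the abstract.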

Highlights

  • Let us be given convex, proper, lower semicontinuous functionals G : X → R and F* : Y → R on finite-dimensional Hilbert spaces X and Y

  • To start the convergence analysis of Algorithm 2.1, we study the application of the standard Chambolle–Pock method (2.3) to linearisations of problem (P) at a base point x ∈ X (a sketch of such a linearised problem follows this list)

  • These will be used in Sections 4.3 and 4.4 to study important special cases. These include in particular total variation (TV) and total generalised variation (TGV²) [5] regularisation
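
The linearised problem mentioned in the second highlight is not reproduced in this summary. A natural reading, offered here only as an assumption about the notation, is that K is replaced by its first-order expansion at the base point (written x̄ below), giving the saddle-point problem

    \min_{x \in X} \max_{y \in Y} \; G(x)
      + \bigl\langle K(\bar{x}) + \nabla K(\bar{x})\,(x - \bar{x}),\, y \bigr\rangle
      - F^*(y).

For a fixed base point the operator acting on x is affine, so the standard Chambolle–Pock method applies to this problem directly.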

Summary

Introduction

Let us be given convex, proper, lower semicontinuous functionals G : X → R and F* : Y → R on finite-dimensional Hilbert spaces X and Y. Observe that F* is strongly convex in the range of the non-linear part of K, corresponding to the forward operator T. Under exactly this kind of structural assumption, along with strict complementarity and non-degeneracy assumptions at the solution, we can show in Section 4 that the inverse Hₓ⁻¹ of the relevant monotone operator possesses the Aubin property required for the general convergence theorem, Theorem 3.2, to hold. In this case the condition on y being small in the non-linear range of K corresponds to f − T(r, φ) being small.
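
To illustrate the structure alluded to above (a hedged reconstruction; this particular splitting is an assumption and is not quoted from the paper), a total variation regularised inverse problem with nonlinear forward operator T, data f and regularisation parameter α can be put into the minimax form of the abstract by dualising both the data term and the regulariser:

    \min_u\; \tfrac12\|f - T(u)\|^2 + \alpha\|\nabla u\|_1
    \;=\;
    \min_u\, \max_{(y,\lambda)}\;
      \bigl\langle (T(u), \nabla u),\, (y,\lambda) \bigr\rangle
      - \Bigl(\tfrac12\|y\|^2 + \langle f, y\rangle\Bigr)
      - \delta_{\{\|\lambda\|_\infty \le \alpha\}}(\lambda),

with the norms on ∇u and λ understood as a dual pair. Here G ≡ 0, K(u) = (T(u), ∇u) and F*(y, λ) = ½‖y‖² + ⟨f, y⟩ + δ{‖λ‖∞ ≤ α}(λ); the quadratic term makes F* strongly convex precisely in the dual variable y paired with the nonlinear component T, which is the structural property invoked in the paragraph above.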

The basics
The proposed method
Linearised problem and proximal point formulation
Basic descent estimate
Idea of convergence proof
A few remarks
Detailed analysis of the non-linear method
General assumptions
Auxiliary results
Convergence of the discrepancy term
Switching local norms: estimates from strong convexity
Aubin property of the inverse
Removing squares
Bridging local solutions
Combining the estimates
Some remarks
Lipschitz estimates
Differentials of set-valued maps
Bounds on Lipschitz factors
Regularisation functionals with L1-type norms
Squared L2 cost functional with L1-type regularisation
Applications and computational experience
Phase reconstruction for velocity-encoded MRI
Diffusion tensor imaging
Method