Abstract

We provide an overview of a class of iterative convex approximation methods for nonlinear optimization problems with convex-over-nonlinear substructure. These problems combine outer convexities on the one hand with nonlinear, generally nonconvex, but differentiable functions on the other hand. All methods from this class use only first-order derivatives of the nonlinear functions and sequentially solve convex optimization problems; all of them generalize the classical Gauss-Newton (GN) method in different ways. We focus on the smooth constrained case and on three methods to address it: Sequential Convex Programming (SCP), Sequential Convex Quadratic Programming (SCQP), and Sequential Quadratically Constrained Quadratic Programming (SQCQP). While the first two methods were previously known, the last is newly proposed and investigated in this paper. We show under mild assumptions that SCP, SCQP and SQCQP have exactly the same local linear convergence (or divergence) rate. We then discuss the special case in which the solution is fully determined by the active constraints, and show that for this case the KKT conditions are sufficient for local optimality and that SCP, SCQP and SQCQP even converge quadratically. In the context of parameter estimation with symmetric convex loss functions, the possible divergence of the methods can in fact be an advantage that helps them avoid some undesirable local minima: generalizing existing results, we show that the presented methods converge to a local minimum if and only if this local minimum is stable against a mirroring operation applied to the measurement data of the estimation problem. All results are illustrated by numerical experiments on a tutorial example.
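
To make the shared mechanism concrete, the following is a minimal sketch, in our own notation (anticipating problem (1) in the Introduction below), of the convex subproblem that SCP solves at an iterate wk: the inner nonlinearities Fi and g are replaced by their first-order Taylor expansions, while the outer convexities φ0 and Ωi are kept.

```latex
% Sketch of the SCP subproblem at an iterate w_k; notation anticipates
% problem (1) below. F_i^L denotes the first-order Taylor expansion of F_i.
\begin{align*}
  F_i^{\mathrm{L}}(w; w_k) &:= F_i(w_k)
    + \tfrac{\partial F_i}{\partial w}(w_k)\,(w - w_k),
    \quad i = 0, 1, \ldots, q, \\
  w_{k+1} \in \arg\min_{w \in \mathbb{R}^n}\;
    & \varphi_0\!\big(F_0^{\mathrm{L}}(w; w_k)\big) \\
  \text{s.t.}\quad
    & F_i^{\mathrm{L}}(w; w_k) \in \Omega_i, \quad i = 1, \ldots, q, \\
    & g(w_k) + \tfrac{\partial g}{\partial w}(w_k)\,(w - w_k) = 0.
\end{align*}
```

Because φ0 and the Ωi are convex and all arguments are now affine in w, each subproblem is convex and requires only first-order derivatives of the nonlinear functions, which is exactly the property the abstract refers to.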

Highlights

  • Throughout this paper we consider nonlinear optimization problems of the form

    min_{w ∈ Rn} φ0(F0(w))  s.t.  Fi(w) ∈ Ωi,  i = 1, . . . , q,  g(w) = 0,  (1)

    with nonlinear functions φ0 : Rm0 → R, v ↦ φ0(v), Fi : Rn → Rmi and g : Rn → Rp

  • We present several methods that can be seen as generalizations of the Constrained Gauss-Newton (CGN) method: we first present the smooth constrained variant of Sequential Convex Programming (SCP), as well as Sequential Convex Quadratic Programming (SCQP) [40], and Sequential Quadratically Constrained Quadratic Programming (SQCQP), a novel method which can be seen as an intermediary between SCP and SCQP (a minimal Gauss-Newton sketch follows this list)

  • A special focus lies on methods for smooth constrained problems, for which we provide an analysis of local convergence
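
As a minimal, illustrative sketch of the classical method that SCP, SCQP and SQCQP generalize, the following Python snippet implements plain Gauss-Newton for the unconstrained least-squares instance of problem (1), i.e. φ0(v) = ½‖v‖² with no constraints. The test problem and all function names are hypothetical, not taken from the paper.

```python
# Minimal sketch of the classical Gauss-Newton (GN) method that the
# highlighted methods generalize. Unconstrained least squares only:
# problem (1) with phi_0(v) = 0.5 * ||v||^2 and no constraints.
import numpy as np

def gauss_newton(F, J, w0, max_iter=50, tol=1e-10):
    """Minimize 0.5 * ||F(w)||^2 using only first-order derivatives of F:
    at each iterate, F is linearized and the resulting convex
    linear least-squares subproblem is solved exactly."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        r, Jw = F(w), J(w)
        # Convex subproblem: min_dw 0.5 * ||r + Jw @ dw||^2
        dw, *_ = np.linalg.lstsq(Jw, -r, rcond=None)
        w = w + dw
        if np.linalg.norm(dw) < tol:
            break
    return w

# Hypothetical test problem: fit y = exp(a*t) + b to synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t) + 0.3
F = lambda w: np.exp(w[0] * t) + w[1] - y             # residual F0(w)
J = lambda w: np.column_stack((t * np.exp(w[0] * t),  # dF/da
                               np.ones_like(t)))      # dF/db
print(gauss_newton(F, J, w0=[0.0, 0.0]))  # converges to ≈ [0.7, 0.3]
```

The constrained methods analyzed in the paper replace this linear least-squares subproblem with more general convex subproblems, but retain the same pattern of linearizing only the inner nonlinearities.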

Introduction

Throughout this paper we consider nonlinear optimization problems of the form

min_{w ∈ Rn} φ0(F0(w))  s.t.  Fi(w) ∈ Ωi,  i = 1, . . . , q,  g(w) = 0,  (1)

with nonlinear functions φ0 : Rm0 → R, v ↦ φ0(v), Fi : Rn → Rmi and g : Rn → Rp. Critically, the function φ0(v) and the sets Ωi are assumed to be convex. The problem is characterized by the “convex-over-nonlinear” substructures φ0(F0(w)) and Fi(w) ∈ Ωi. We call φ0 and the Ωi the “outer convexities” and the Fi the “inner nonlinearities”. A special, but still quite general, case is when the sets Ωi can be described by smooth convex functions φi : Rmi → R via Ωi = {v ∈ Rmi : φi(v) ≤ 0}.
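
Assuming this description of the Ωi (our reading of the excerpt above, with φi as the assumed smooth convex constraint functions), problem (1) can be rewritten in the smooth constrained form that the paper focuses on:

```latex
% Sketch: smooth constrained form of problem (1) under the assumption
% Omega_i = { v in R^{m_i} : phi_i(v) <= 0 } with smooth convex phi_i.
\begin{align*}
  \min_{w \in \mathbb{R}^n}\; & \varphi_0\big(F_0(w)\big) \\
  \text{s.t.}\quad & \varphi_i\big(F_i(w)\big) \le 0, \quad i = 1, \ldots, q, \\
  & g(w) = 0.
\end{align*}
```

Each constraint then again exhibits the convex-over-nonlinear substructure φi(Fi(w)), with outer convexity φi and inner nonlinearity Fi.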
