Abstract

We analyze the convergence properties of two Newton-type algorithms for the solution of unconstrained nonlinear optimization problems with convex substructure: Generalized Gauss-Newton (GGN) and Sequential Convex Programming (SCP). While both algorithms are identical to the classical Gauss-Newton method in the special case of nonlinear least squares, they differ when applied to more general convex outer functions. We show, under mild assumptions, that GGN and SCP converge locally linearly with the same contraction rate. The convergence or divergence rate can be characterized as the smallest scalar satisfying two linear matrix inequalities. We further show that slow convergence or even divergence at a given local minimum can be a desirable property in the context of estimation problems with symmetric likelihood functions, because it prevents the algorithm from being attracted to statistically undesirable local minima. Both algorithms and their convergence properties are illustrated with a numerical example.
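
For context on the special case in which the two methods coincide, the following is a minimal sketch of the classical Gauss-Newton iteration for nonlinear least squares, min_x 0.5*||F(x)||^2. The function names (gauss_newton, F, J) and the exponential-fitting example are illustrative assumptions, not taken from the paper, and the sketch does not implement the paper's GGN or SCP variants for general convex outer functions.

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Gauss-Newton for min_x 0.5*||F(x)||^2.

    F : callable returning the residual vector F(x)
    J : callable returning the Jacobian of F at x
    """
    x = x0.copy()
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        # Gauss-Newton step: solve the linearized least-squares subproblem
        #   min_d 0.5*||F(x) + J(x) d||^2
        d, *_ = np.linalg.lstsq(Jx, -r, rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Illustrative use: fit y = exp(a*t) to noise-free data with unknown a
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
F = lambda a: np.exp(a[0] * t) - y                    # residual vector
J = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)   # Jacobian (20x1)
print(gauss_newton(F, J, np.array([0.0])))            # approximately [0.7]
```

For a zero-residual problem such as this one the iteration converges fast locally; the paper's analysis concerns the locally linear contraction rate that appears in the more general setting of convex outer functions.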
