Abstract

Iterative algorithms of Gauss–Newton type for the solution of nonlinear least squares problems are considered. They separate the variables into two sets in such a way that, in each iteration, optimization is first performed with respect to the first set, and the second set is then corrected. The linear-nonlinear case, where the first set consists of variables that occur linearly, is given special attention, and a new algorithm is derived that is simpler to apply than the variable projection algorithm described by Golub and Pereyra, and that can be performed with no more arithmetical operations than the unseparated Gauss–Newton algorithm. A detailed analysis of the asymptotic convergence properties of both separated and unseparated algorithms is performed. They are found to have comparable rates of convergence, and all converge almost quadratically for almost compatible problems. Simpler separation schemes, on the other hand, converge only linearly. An efficient and simple computer implementation is described; it uses $QR$ decompositions of appropriate derivative matrices. A series of numerical tests is reported, on both artificial and realistic data. The theoretical results on asymptotic behavior are found to give a good prediction of the number of iterations needed in a test run. Typical applications are least squares approximation by exponential or rational functions, and overdetermined nonlinear systems of equations; examples of both are given.
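As a hedged illustration of the linear-nonlinear separation described above (not the specific algorithm derived in the paper, nor Golub and Pereyra's variable projection implementation), the following sketch fits a sum of exponentials $y \approx a_1 e^{-\alpha_1 t} + a_2 e^{-\alpha_2 t}$: for each trial value of the nonlinear parameters $\alpha$, the linear coefficients $a$ are eliminated by an inner linear least squares solve, and a Gauss–Newton-type solver is then applied to the reduced residual in $\alpha$ alone. The data, starting values, and the use of SciPy's Levenberg–Marquardt option are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data (assumed for the example):
# y ~ a1*exp(-alpha1*t) + a2*exp(-alpha2*t) + noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)

def design_matrix(alpha):
    """Columns of the model that enter linearly, for fixed nonlinear parameters alpha."""
    return np.exp(-np.outer(t, alpha))           # shape (m, number of linear parameters)

def reduced_residual(alpha):
    """Residual after eliminating the linear coefficients by linear least squares."""
    Phi = design_matrix(alpha)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # optimal linear coefficients for this alpha
    return Phi @ a - y

# Gauss-Newton-type iteration on the nonlinear parameters only
# (method="lm" is a Levenberg-Marquardt variant applied to the reduced problem).
sol = least_squares(reduced_residual, x0=np.array([0.5, 2.0]), method="lm")
alpha_hat = sol.x
a_hat, *_ = np.linalg.lstsq(design_matrix(alpha_hat), y, rcond=None)
print("nonlinear parameters:", alpha_hat)
print("linear coefficients: ", a_hat)
```

The inner linear solve plays the role of the first (linear) variable set in the separated iteration; a more careful implementation along the lines discussed in the paper would use a $QR$ decomposition of the design matrix rather than a generic least squares routine.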
