Abstract

The minimization of a quadratic function within an ellipsoidal trust region is an important subproblem in many nonlinear programming algorithms. When the number of variables is large, one of the most widely used strategies is to project the original problem onto a low-dimensional subspace. In this paper, we introduce an algorithm for solving nonlinear least squares problems. The algorithm constructs a basis for a Krylov subspace and uses a model trust region technique to choose the step; the computed step on the low-dimensional subspace lies inside the trust region. The Krylov iteration is terminated by a condition that guarantees a sufficient decrease of the gradient on the subspace. A convergence theory for this algorithm is presented, and the algorithm is shown to be globally convergent.
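The projection strategy described in the abstract can be sketched as follows: an orthonormal Krylov basis is built with the Lanczos process, which needs only matrix-vector products, and the quadratic model is restricted to that basis, producing a small tridiagonal problem. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name `krylov_projection` and the tolerances are assumptions.

```python
import numpy as np

def krylov_projection(hess_vec, g, m):
    """Build an orthonormal Lanczos basis Q for span{g, Hg, ..., H^{m-1}g}
    and the projected tridiagonal model T = Q^T H Q.
    hess_vec: callable returning H @ v (matrix-vector product only)."""
    n = g.size
    Q = np.zeros((n, m))
    T = np.zeros((m, m))
    beta_prev, q_prev = 0.0, np.zeros(n)
    q = g / np.linalg.norm(g)
    for j in range(m):
        Q[:, j] = q
        w = hess_vec(q) - beta_prev * q_prev   # three-term Lanczos recurrence
        alpha = q @ w
        T[j, j] = alpha
        w -= alpha * q
        beta = np.linalg.norm(w)
        if j + 1 < m:
            if beta < 1e-12:                   # invariant subspace: stop early
                return Q[:, :j + 1], T[:j + 1, :j + 1]
            T[j, j + 1] = T[j + 1, j] = beta
        beta_prev, q_prev = beta, q
        if beta >= 1e-12:
            q = w / beta
    return Q, T
```

Because Q has orthonormal columns, a step s = Qy satisfies ||s|| = ||y||, so the trust region constraint transfers unchanged to the small problem in y.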

Highlights

  • Nonlinear least squares (NLS) problems are unconstrained optimization problems with special structure. Such problems arise in many contexts, such as the solution of overdetermined systems of nonlinear equations, scientific experiments, pattern recognition, and maximum likelihood estimation

  • The first type is most closely related to solving systems of nonlinear equations [2]

  • An inner iteration is performed which consists of using the current trust region radius, Δk, and the information contained in the quadratic model to compute a step s with ‖s‖ ≤ Δk
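An inner iteration of this kind, computing a step s whose norm stays within the radius Δk using only matrix-vector products with the model Hessian, is classically realized by truncated conjugate gradients in the style of Steihaug and Toint. The sketch below is illustrative and under that assumption; it is not claimed to be the paper's exact inner solver.

```python
import numpy as np

def steihaug_cg(hess_vec, g, delta, tol=1e-8, max_iter=100):
    """Approximately minimize m(s) = g's + 0.5 s'Hs subject to ||s|| <= delta
    by truncated conjugate gradients (Steihaug-Toint). Stops on a small model
    gradient, at the trust region boundary, or on negative curvature."""
    s = np.zeros_like(g)
    if np.linalg.norm(g) <= tol:
        return s
    r = g.copy()                    # residual = gradient of the model at s
    d = -r
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:                # negative curvature: go to the boundary
            return s + _boundary_step(s, d, delta) * d
        alpha = (r @ r) / dHd
        if np.linalg.norm(s + alpha * d) >= delta:   # step leaves the region
            return s + _boundary_step(s, d, delta) * d
        s = s + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) <= tol:  # model gradient sufficiently reduced
            return s
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return s

def _boundary_step(s, d, delta):
    """Positive tau with ||s + tau d|| = delta (quadratic formula)."""
    a = d @ d
    b = 2 * s @ d
    c = s @ s - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```

The CG iterates stay in the Krylov subspace generated by g, which is why this inner solve matches the subspace-projection view taken in the paper.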


Summary

Introduction

Nonlinear least squares (NLS) problems are unconstrained optimization problems with special structure. The presented algorithm is a Newton-Krylov type algorithm: it requires fixed-size limited storage proportional to the size of the problem and relies only upon matrix-vector products. The trial computational step in these methods is to find an approximate minimizer of some model of the true objective function within a trust region, for which a suitable norm of the correction lies inside a given bound. The recent work of Sorensen provides an algorithm based on recasting the trust region subproblem as a parameterized eigenvalue problem. This algorithm provides a superlinearly convergent scheme to adjust the parameter and recovers the optimal solution from the eigenvector of the parameterized problem, as long as the hard case does not occur. Concluding remarks and future ideas are given in the last section
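The trust region framework referred to in the introduction wraps any such inner step in an outer loop that accepts or rejects the step and adjusts the radius from the ratio of actual to predicted reduction. The following is a hedged sketch for the Gauss-Newton model of an NLS problem; the inner solve here is a Gauss-Newton step clipped to the region, standing in for the paper's Krylov step, and all names and constants (`eta`, the shrink/expand factors) are illustrative defaults, not the paper's choices.

```python
import numpy as np

def trust_region_nls(res, jac, x0, delta0=1.0, delta_max=100.0,
                     eta=0.125, tol=1e-8, max_iter=200):
    """Generic trust region loop for min 0.5 ||r(x)||^2 with the Gauss-Newton
    model m(s) = 0.5 ||r + J s||^2. Illustrative sketch, not the paper's
    exact algorithm: the inner step is a clipped Gauss-Newton solve."""
    x, delta = np.asarray(x0, float), delta0
    for _ in range(max_iter):
        r, J = res(x), jac(x)
        g = J.T @ r                      # gradient of 0.5 ||r||^2
        if np.linalg.norm(g) <= tol:
            break
        B = J.T @ J                      # Gauss-Newton Hessian approximation
        s = np.linalg.solve(B + 1e-12 * np.eye(B.shape[0]), -g)
        if np.linalg.norm(s) > delta:    # clip the step to the region
            s *= delta / np.linalg.norm(s)
        pred = -(g @ s + 0.5 * s @ B @ s)          # predicted reduction
        rs = res(x + s)
        ared = 0.5 * (r @ r - rs @ rs)             # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho > eta:                    # accept the step
            x = x + s
        if rho < 0.25:                   # poor agreement: shrink the region
            delta *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
            delta = min(2 * delta, delta_max)      # good step at boundary: expand
    return x
```

The ratio test rho is what drives the global convergence arguments in this class of methods: steps are only accepted when the model's predicted reduction is realized to a sufficient degree.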

Structure of the Problem
Algorithmic Framework
The Restarting Mechanism
Global Convergence Analysis
Concluding Remarks