Abstract

To solve unconstrained nonlinear minimization problems, we propose an optimal algorithm (OA) as well as a globally optimal algorithm (GOA), which deflect the gradient direction to the best descent direction at each iteration step, with the optimal parameter derived explicitly. An invariant manifold, defined for the model problem in terms of a locally quadratic function, is used to derive a purely iterative algorithm, and its convergence is proven. Then, the rank-two updating techniques of BFGS are employed, resulting in several novel algorithms that are faster than the steepest descent method (SDM) and the variable metric method (DFP). Six numerical examples are examined and compared with exact solutions, revealing that the new algorithms OA and GOA, as well as their updated versions, have superior computational efficiency and accuracy.

Highlights

  • The steepest descent method (SDM), which can be traced back to Cauchy (1847), is the simplest gradient method for solving unconstrained minimization problems

  • The novel algorithm is named “an optimal algorithm (OA)” because, in the local frame, we have derived the optimal parameter α in the descent direction, which is a linear combination of the gradient vector and a supplemental vector (see the sketch after this list)

  • We have demonstrated a critical descent vector to derive a globally optimal algorithm (GOA), which can substantially accelerate the convergence speed in the numerical solution of nonlinear minimization problems
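
The Python/NumPy sketch below illustrates the deflected-direction idea behind the OA under a locally quadratic model, where f(x_k − λu) ≈ f(x_k) − λ g·u + ½λ² uᵀAu: the direction u = g + αv mixes the gradient g with a supplemental vector v, and α is chosen so that the model decrease (g·u)²/(2uᵀAu) is largest. The paper derives α explicitly; the grid search here is only a stand-in for that closed form, and the names deflected_direction and step, as well as the assumption of an available local Hessian (or approximation) A, are illustrative.

```python
# Hypothetical sketch of the deflected-gradient idea: u = g + alpha*v,
# with alpha tuned to maximize the quadratic-model decrease
# (g.u)^2 / (2 u^T A u).  The paper gives alpha in closed form; the
# grid scan below is a numerical stand-in for illustration only.
import numpy as np

def deflected_direction(g, v, A, alphas=np.linspace(-5.0, 5.0, 2001)):
    """Return u = g + alpha*v for the best alpha on a scan grid."""
    best_u, best_gain = g, 0.0
    for a in alphas:
        u = g + a * v
        uAu = u @ A @ u
        if uAu <= 0.0:                       # skip non-descent combinations
            continue
        gain = (g @ u) ** 2 / (2.0 * uAu)    # decrease under an exact line step
        if gain > best_gain:
            best_u, best_gain = u, gain
    return best_u

def step(x, grad_f, A, v):
    """One iteration x -> x - lam*u with the exact quadratic-model step
    lam = g.u / (u^T A u) along the deflected direction u."""
    g = grad_f(x)
    u = deflected_direction(g, v, A)
    lam = (g @ u) / (u @ A @ u)
    return x - lam * u
```

With α = 0 this reduces to the SDM step, so the deflection can only improve the model decrease per iteration; the choice of the supplemental vector v is where the OA and GOA of the paper differ.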


Summary

Introduction

The steepest descent method (SDM), which can be traced back to Cauchy (1847), is the simplest gradient method for solving unconstrained minimization problems. Barzilai and Borwein [1] presented a new choice of steplength through a two-point stepsize. Although their method does not guarantee descent of the objective function values, it produces a substantial improvement of the convergence speed on certain quadratic test problems. Besides the SDM, there are many modifications of the conjugate gradient method for unconstrained minimization problems, such as those of Birgin and Martinez [11], Andrei [12,13,14], Zhang [15], Babaie-Kafaki et al. [16], and Shi and Guo [17]. There is another class of methods in which the descent direction d in f(x_k − λd) is taken to be D∇f(x_k), where D is a positive definite matrix that approximates the inverse of the Hessian matrix A; such methods are usually called quasi-Newton methods.
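
For reference, a minimal sketch of the two-point (Barzilai-Borwein) stepsize of [1] mentioned above, using the standard BB1 formula λ_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1}; the function name bb_gradient, the initial steplength, and the stopping tolerance are illustrative choices, not taken from the paper.

```python
# Two-point (Barzilai-Borwein) stepsize gradient iteration (BB1).
# This is the method of [1], shown for comparison; it is not the
# paper's OA/GOA.
import numpy as np

def bb_gradient(f_grad, x0, tol=1e-8, max_iter=500, lam0=1e-3):
    x = np.asarray(x0, dtype=float)
    g = f_grad(x)
    lam = lam0                                 # initial steplength (assumed)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - lam * g                    # plain gradient step
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = (s @ s) / sy if sy > 0 else lam0 # BB1 stepsize, guarded
        x, g = x_new, g_new
    return x
```

For instance, on the quadratic f(x) = ½xᵀAx − bᵀx one would pass f_grad = lambda x: A @ x - b; as noted above, the iteration is not monotonically decreasing in f, yet converges markedly faster than the SDM on such problems.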

An Invariant Manifold
Numerical Methods
The Broyden-Fletcher-Goldfarb-Shanno Updating Techniques
Numerical Examples
Conclusions