Abstract

We present a new Newton-like method for large-scale unconstrained nonconvex minimization. A new, straightforward limited memory quasi-Newton update, based on a modified quasi-Newton equation, is derived to construct the trust region subproblem; information from both function values and gradients is used to build the approximate Hessian. The global convergence of the algorithm is proved. Numerical results indicate that the proposed method is competitive and efficient on some classical large-scale nonconvex test problems.

Highlights

  • We consider the unconstrained optimization problem min f(x), x ∈ Rn (1), where f : Rn → R is continuously differentiable

  • In Step 2, the CG-Steihaug algorithm of [3] is used to solve the subproblem (2), which makes the algorithm suitable for large-scale unconstrained optimization (a minimal sketch of this solver appears after this list)

  • The comparison algorithm is called NTR, which is the same as NLMTR except that Bk is updated by the standard BFGS formula
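
The CG-Steihaug (truncated conjugate gradient) solver mentioned above is a standard way to approximately solve the trust region subproblem when the dimension is large. The Python sketch below only illustrates that general technique, not the paper's implementation; the names `cg_steihaug`, `B_mv`, and `_to_boundary` and the default tolerance are assumptions made for the example.

```python
import numpy as np

def cg_steihaug(B_mv, g, delta, tol=1e-8, max_iter=None):
    """Approximately solve  min_d  g^T d + 0.5 d^T B d  s.t. ||d|| <= delta
    with the Steihaug-Toint truncated conjugate gradient method (illustrative sketch)."""
    n = g.size
    if max_iter is None:
        max_iter = 2 * n
    d = np.zeros(n)
    r = g.copy()                 # residual r = B d + g; equals g since d = 0
    if np.linalg.norm(r) < tol:  # gradient already small: zero step
        return d
    p = -r                       # first direction: steepest descent
    for _ in range(max_iter):
        Bp = B_mv(p)
        pBp = p @ Bp
        if pBp <= 0.0:
            # Negative curvature: follow p to the trust region boundary
            return _to_boundary(d, p, delta)
        alpha = (r @ r) / pBp
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= delta:
            # Step would leave the trust region: truncate on the boundary
            return _to_boundary(d, p, delta)
        r_next = r + alpha * Bp
        if np.linalg.norm(r_next) < tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        p = -r_next + beta * p
        d, r = d_next, r_next
    return d

def _to_boundary(d, p, delta):
    """Return d + tau * p with tau >= 0 chosen so that ||d + tau p|| = delta."""
    a = p @ p
    b = 2.0 * (d @ p)
    c = d @ d - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return d + tau * p

# Tiny usage example with a diagonal model Hessian (purely illustrative)
if __name__ == "__main__":
    B = np.diag(np.linspace(1.0, 10.0, 5))
    g = np.ones(5)
    print(cg_steihaug(lambda v: B @ v, g, delta=0.5))
```

Here `B_mv` is a callable returning the product of the (limited memory) Hessian approximation with a vector, so the full matrix Bk never has to be formed, which is what makes the approach practical for large-scale problems.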

Summary

Introduction

We consider the unconstrained optimization problem min f(x), x ∈ Rn, where f : Rn → R is continuously differentiable. Trust region methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14] are robust, can be applied to ill-conditioned problems, and have strong global convergence properties. Newton’s method has been efficiently safeguarded to ensure its global convergence to first- and even second-order critical points in the presence of local nonconvexity of the objective, using line search [3], trust region [4], or other regularization techniques [9, 13]. We derive a new, straightforward limited memory quasi-Newton update based on the modified quasi-Newton equation, which uses both available gradient and function value information, to construct the trust region subproblem.
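
For reference, the trust region subproblem at iterate x_k with approximate Hessian B_k has the standard form shown first below. The second display is one well-known modified quasi-Newton (secant) equation from the literature that uses both function values and gradients (the Zhang–Deng–Chen type); it is included only as a representative example of such equations, not necessarily the exact formula derived in this paper.

```latex
% Standard trust region subproblem at iterate x_k (Delta_k is the radius)
\min_{d \in \mathbb{R}^n} \; m_k(d) = f(x_k) + g_k^{T} d + \tfrac{1}{2}\, d^{T} B_k d
\quad \text{s.t.} \quad \|d\| \le \Delta_k .

% A representative modified secant equation using function values and gradients
% (Zhang--Deng--Chen type; shown for illustration, not the paper's own formula)
B_{k+1} s_k = \bar{y}_k , \qquad
\bar{y}_k = y_k + \frac{\theta_k}{s_k^{T} u_k}\, u_k , \qquad
\theta_k = 6\bigl(f(x_k) - f(x_{k+1})\bigr) + 3\,(g_k + g_{k+1})^{T} s_k ,
```

where s_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k, and u_k is any vector with s_k^T u_k ≠ 0 (a common choice is u_k = s_k). Equations of this kind transmit function value information into the Hessian approximation, which is the idea the paper builds its limited memory update on.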

The Modified Limited Memory Quasi-Newton Formula
Newton-Like Trust Region Method
Numerical Results