A New Globally Convergent Self-Scaling VM Algorithm for Convex and Nonconvex Optimization

Abstract

Highlights

  • Consider the unconstrained optimization problem $\min_{x \in \mathbb{R}^n} f(x)$ ...(1), where $f$ is a continuously differentiable function of $n$ variables; quasi-Newton methods for solving (1) generate a new search direction $d_{k+1} = -H_{k+1} g_{k+1}$ ...(2) at each iteration, where $g_{k+1} = \nabla f(x_{k+1})$ is the gradient of $f$ evaluated at the current iterate $x_{k+1}$ (Storey & Hu, 1993)

  • We propose a new self-scaling VM-type algorithm for unconstrained minimization based on a modified quasi-Newton condition

  • We claim that the update formulas (26)-(27) are more efficient than the standard Broyden-Fletcher-Goldfarb-Shanno (BFGS) method (a generic self-scaling update is sketched after this list)
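
The paper's formulas (26)-(27) are not reproduced on this page, so the sketch below shows only the generic shape of a self-scaling VM update: a standard BFGS update of the inverse Hessian, preceded by an Oren-Luenberger-type scaling factor. The function name, the scaling choice, and the curvature safeguard are illustrative assumptions, not the authors' formulas.

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """Generic self-scaling BFGS update of the inverse Hessian
    (a sketch, not the paper's formulas (26)-(27)).

    H : current inverse-Hessian approximation H_k (n x n array)
    s : step s_k = x_{k+1} - x_k (array)
    y : gradient change y_k = g_{k+1} - g_k (array)
    """
    sy = s @ y
    if sy <= 1e-12:          # curvature condition fails: keep H unchanged
        return H
    Hy = H @ y
    tau = sy / (y @ Hy)      # Oren-Luenberger scaling factor (assumed choice)
    rho = 1.0 / sy
    V = np.eye(s.size) - rho * np.outer(s, y)
    # Scale H before the standard BFGS inverse update.
    return tau * (V @ H @ V.T) + rho * np.outer(s, s)
```

Scaling $H_k$ by $\tau_k = s_k^T y_k / (y_k^T H_k y_k)$ before the update tends to keep the eigenvalues of the approximation well conditioned, which is the usual motivation for self-scaling VM methods.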


Summary

Introduction

Consider the unconstrained optimization problem $\min_{x \in \mathbb{R}^n} f(x)$ ...(1), where $f$ is a continuously differentiable function of $n$ variables. Quasi-Newton methods for solving (1) generate a new search direction $d_{k+1}$ at each iteration, given by $d_{k+1} = -H_{k+1} g_{k+1}$ ...(2), where $g_{k+1} = \nabla f(x_{k+1})$ is the gradient of $f$ evaluated at the current iterate $x_{k+1}$ (Storey & Hu, 1993). One computes the iterates by the formula $x_{k+1} = x_k + \alpha_k d_k$ ...(3), where the step size $\alpha_k$ satisfies the Wolfe conditions $f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k$ and $g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k$, with $0 < \delta < \sigma < 1$.
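
To make the iteration (1)-(3) concrete, here is a minimal sketch of a quasi-Newton loop with a Wolfe line search, written against SciPy's `scipy.optimize.line_search`. The standard BFGS update of $H_k$, the tolerances, and the Rosenbrock test function are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import line_search

def quasi_newton(f, grad, x0, max_iter=200, tol=1e-8):
    """Quasi-Newton iteration: d_k = -H_k g_k (eq. (2)) and
    x_{k+1} = x_k + alpha_k d_k (eq. (3)), with alpha_k chosen to
    satisfy the Wolfe conditions."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)               # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:  # stop when the gradient is small
            break
        d = -H @ g                   # search direction, eq. (2)
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # Wolfe line search
        if alpha is None:            # line search failed to find a step
            break
        x_new = x + alpha * d        # new iterate, eq. (3)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:               # curvature holds: BFGS update of H
            rho = 1.0 / sy
            V = np.eye(x.size) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Illustrative usage on the Rosenbrock function (not from the paper).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(quasi_newton(f, grad, [-1.2, 1.0]))
```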

