Abstract
This paper presents a new memory gradient method for unconstrained optimization problems. The method uses iterative information from the current and several previous steps to generate each new iterate, and it provides additional freedom in selecting parameters. It is therefore well suited to large-scale unconstrained optimization problems. Global convergence is proved under some mild conditions. Numerical experiments show that the algorithm is efficient in many situations.
Highlights
Consider the unconstrained optimization problem $\min f(x),\ x \in \mathbb{R}^n$, (1) where $\mathbb{R}^n$ is an $n$-dimensional Euclidean space and $f : \mathbb{R}^n \to \mathbb{R}^1$ is a continuously differentiable function
In order to make full use of the current and previous multi-step iterative information, to improve the capability of these methods, and to guarantee their convergence, some scholars have studied memory gradient methods and super-memory gradient methods. Like conjugate gradient methods, these two classes of methods are suitable for large-scale optimization problems. They are more stable than conjugate gradient methods (e.g., Cragg and Levy, 1969; Shi and Shen, 2005; Shi, 2003), because they use more previous iterative information and provide additional freedom in selecting parameters
Taking advantage of the line search rule that was presented in (Shi Zhenjun, and Shen J., 2005), this paper presents a new memory gradient method and proves its global convergence under some mild conditions
Summary
Consider the unconstrained optimization problem $\min f(x),\ x \in \mathbb{R}^n$, (1) where $\mathbb{R}^n$ is an $n$-dimensional Euclidean space and $f : \mathbb{R}^n \to \mathbb{R}^1$ is a continuously differentiable function. The iterates are generated by $x_{k+1} = x_k + \alpha_k d_k$, where $d_k$ is a search direction of $f(x)$ at $x_k$ and $\alpha_k$ is a positive step-size. Let $x_k$ be the current iterative point; we denote $\nabla f(x_k)$ by $g_k$, $f(x_k)$ by $f_k$, and $f(x^*)$ by $f^*$, respectively.
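To make the scheme concrete, the sketch below implements a *generic* memory gradient iteration, not the specific method of this paper: the search direction combines the negative gradient $-g_k$ with a small number of previous directions, and the step-size $\alpha_k$ comes from a backtracking Armijo line search rather than the line search rule of Shi and Shen (2005). The memory depth `m`, the mixing weight `beta`, and the line-search parameters `sigma` and `rho` are all illustrative choices, not values from the paper.

```python
import numpy as np

def memory_gradient(f, grad, x0, m=3, beta=0.4, sigma=1e-4, rho=0.5,
                    tol=1e-6, max_iter=1000):
    """Generic memory gradient sketch (illustrative, not the paper's method).

    The direction d_k = -g_k + sum_{i=1}^{m} beta^i * d_{k-i} mixes the
    steepest-descent direction with up to m stored previous directions;
    alpha_k is chosen by backtracking Armijo line search.
    """
    x = np.asarray(x0, dtype=float)
    dirs = []  # previous search directions d_{k-1}, ..., d_{k-m}
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # memory gradient direction
        d = -g
        for i, d_prev in enumerate(dirs, start=1):
            d += (beta ** i) * d_prev
        # safeguard: if d is not a descent direction, fall back to -g
        if g @ d >= 0:
            d = -g
        # backtracking Armijo line search for the step-size alpha_k
        alpha = 1.0
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= rho
        x = x + alpha * d
        dirs = ([d] + dirs)[:m]  # keep at most m previous directions
    return x
```

For example, minimizing the quadratic $f(x) = \|x - c\|^2$ from the origin drives the iterates to $c$; the stored directions make the method behave like conjugate-gradient-style schemes while retaining a steepest-descent fallback.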