Abstract

This paper extends the Hager–Zhang (HZ) nonlinear conjugate gradient method to vector optimization. In the scalar minimization case, this method generates descent directions whenever, for example, the line search satisfies the standard Wolfe conditions. We first show that, in general, the direct extension of the HZ method to vector optimization does not yield descent (in the vector sense), even when an exact line search is employed. By using a sufficiently accurate line search, we then propose a self-adjusting HZ method that possesses the descent property. With suitable parameter choices, the proposed method reduces to the classical HZ method in the scalar minimization case. Global convergence of the new scheme is proved without regular restarts or any convexity assumption. Finally, numerical experiments illustrating the practical behavior of the approach are presented, and comparisons with the Hestenes–Stiefel conjugate gradient and steepest descent methods are discussed.
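For context, the classical scalar-case HZ update referred to above takes the following well-known form (this is the original Hager–Zhang formula for minimizing a scalar function f, not the vector extension proposed in the paper; the symbols g_k, d_k, y_k are standard notation introduced here for illustration):

\[
d_{k+1} = -g_{k+1} + \beta_k^{HZ}\, d_k,
\qquad
\beta_k^{HZ} = \frac{1}{d_k^{\top} y_k}
\left( y_k - 2\, d_k \frac{\|y_k\|^2}{d_k^{\top} y_k} \right)^{\!\top} g_{k+1},
\qquad
y_k = g_{k+1} - g_k,
\]

where \( g_k = \nabla f(x_k) \) and \( d_0 = -g_0 \). Under the standard Wolfe conditions on the step size, this update is known to produce descent directions in the scalar case, which is the property whose vector-valued analogue the paper investigates.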
