Abstract
The recently proposed quasi-Newton method for constrained optimization has very attractive local convergence properties. To force global convergence of the method, Han proposed a descent method that uses Zangwill's penalty function and an exact line search. In this paper a new method is presented that adopts a differentiable penalty function and an approximate line search. The proposed penalty function has the form of the augmented Lagrangian function. An algorithm for updating the parameters that appear in the penalty function is described, and global convergence of the given method is proved.
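The abstract does not reproduce the penalty function itself. For orientation only, a standard augmented Lagrangian for the equality-constrained problem of minimizing $f(x)$ subject to $c(x)=0$ has the form sketched below; the multiplier estimate $\lambda$ and penalty parameter $r>0$ stand in for the parameters that such methods update between iterations, and the paper's exact function and update rule may differ.
% Assumed standard augmented-Lagrangian form, for illustration only;
% the paper's penalty function is not given in the abstract.
\[
  P(x,\lambda,r) \;=\; f(x) \;+\; \lambda^{\mathsf T} c(x) \;+\; \tfrac{r}{2}\,\lVert c(x) \rVert^{2},
\]
% where \lambda approximates the Lagrange multipliers and r > 0
% weights the quadratic constraint-violation term.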