Abstract

We consider a general class of nonlinear constrained optimization problems in which derivatives of the objective function and constraints are unavailable. The absence of derivatives often impedes the performance of optimization algorithms, most of which compute a quasi-Newton direction and then apply line search techniques. We propose a smoothing algorithm that does not require a penalty function. A new algorithm is developed to update the trust region and to handle the constraints using radial basis functions (RBFs). The objective function value is reduced in accordance with the predicted reduction in constraint violation achieved by the trial step. At each iteration, the constraints are approximated by a quadratic model obtained from RBF interpolation. The aim of the present work is to maintain a well-positioned set of interpolation points so that an adequate approximation is obtained even in a small trust region. Numerical results are presented for a set of standard test problems.
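To make the model-based step concrete, below is a minimal sketch, not taken from the paper, of how an RBF interpolation model of a black-box function can be built from sample points and then minimized inside a trust region. The cubic kernel with a linear polynomial tail, the function names (`build_rbf_model`, `trust_region_step`), and the sample objective are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: fit a cubic RBF interpolation model to black-box function
# values and take a trial step by minimizing it inside a trust region.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint


def build_rbf_model(points, values):
    """Fit s(x) = sum_i lam_i * ||x - y_i||^3 + c0 + c^T x to the data.

    `points` is an (m, n) array of interpolation points, `values` holds the
    corresponding black-box function values. Returns the model as a callable.
    """
    m, n = points.shape
    # Cubic RBF block Phi_ij = ||y_i - y_j||^3.
    diff = points[:, None, :] - points[None, :, :]
    Phi = np.linalg.norm(diff, axis=2) ** 3
    # Linear polynomial tail: columns [1, y_i].
    P = np.hstack([np.ones((m, 1)), points])
    # Saddle-point interpolation system [[Phi, P], [P^T, 0]] [lam; c] = [f; 0].
    A = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([values, np.zeros(n + 1)])
    sol = np.linalg.solve(A, rhs)
    lam, c = sol[:m], sol[m:]

    def model(x):
        r = np.linalg.norm(points - x, axis=1)
        return lam @ r ** 3 + c[0] + c[1:] @ x

    return model


def trust_region_step(model, x_k, delta):
    """Approximately minimize the model subject to ||x - x_k|| <= delta."""
    ball = NonlinearConstraint(lambda x: np.linalg.norm(x - x_k), 0.0, delta)
    res = minimize(model, x_k, constraints=[ball])
    return res.x


if __name__ == "__main__":
    # Hypothetical black-box objective (derivatives treated as unavailable).
    f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2
    rng = np.random.default_rng(0)
    x_k, delta = np.zeros(2), 0.5
    pts = x_k + delta * rng.uniform(-1.0, 1.0, size=(8, 2))
    vals = np.array([f(p) for p in pts])
    s = build_rbf_model(pts, vals)
    x_trial = trust_region_step(s, x_k, delta)
    print("trial step:", x_trial, "model:", s(x_trial), "true:", f(x_trial))
```

In a full method of this kind, the same interpolation machinery would be applied to each constraint as well, and the trial step would be accepted or rejected by comparing the actual and predicted reductions before updating the trust-region radius.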
