Abstract

In this paper, we adapt genetic algorithms to constrained optimization problems. We use a dynamic penalty approach combined with an annealing scheme, forcing the search to concentrate on feasible solutions as the algorithm progresses. We propose two general-purpose methods for guaranteeing convergence to a globally optimal (feasible) solution, neither of which makes any assumptions about the structure of the optimization problem. The first modifies the GA evolution operators to yield a Boltzmann-type distribution on populations. The second combines a dynamic penalty with a slow annealing of the acceptance probabilities. We prove that both methods converge, with probability one, to a globally optimal feasible state.
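
To make the second method concrete, the sketch below (Python; not taken from the paper) combines a dynamic penalty, whose weight grows with the generation count, with a Metropolis-style acceptance rule whose temperature is slowly annealed, so the search increasingly concentrates on feasible solutions. The objective, constraint, penalty and cooling schedules, and the mutation-only variation operator are all illustrative assumptions rather than the paper's construction.

```python
# Minimal sketch of a dynamic-penalty GA with annealed (Metropolis-style)
# acceptance. The penalty weight grows and the temperature decays with the
# generation count t, so infeasible solutions are tolerated early and
# suppressed later. Problem, schedules, and operators are assumed for
# illustration, not taken from the paper.
import math
import random

def objective(x):
    # Example objective (to minimize): a shifted sphere function.
    return sum((xi - 1.0) ** 2 for xi in x)

def constraint_violation(x):
    # Example constraint g(x) = sum(x) - 1 <= 0; return the violated amount.
    return max(0.0, sum(x) - 1.0)

def penalized_cost(x, t):
    # Dynamic penalty: the weight increases with generation t (assumed
    # linear schedule).
    weight = 1.0 + 10.0 * t
    return objective(x) + weight * constraint_violation(x)

def mutate(x, sigma=0.1):
    # Gaussian mutation of every coordinate.
    return [xi + random.gauss(0.0, sigma) for xi in x]

def anneal_ga(dim=3, pop_size=30, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-2.0, 2.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for t in range(generations):
        temp = 1.0 / (1.0 + 0.05 * t)  # assumed cooling schedule
        new_pop = []
        for parent in pop:
            child = mutate(parent)
            delta = penalized_cost(child, t) - penalized_cost(parent, t)
            # Metropolis-style acceptance: always accept improvements,
            # accept worse children with probability exp(-delta / temp).
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
    return min(pop, key=lambda x: penalized_cost(x, generations))

if __name__ == "__main__":
    best = anneal_ga()
    print("best solution:", [round(v, 3) for v in best])
    print("objective:", round(objective(best), 3),
          "violation:", round(constraint_violation(best), 6))
```

A full GA would also include crossover and population-level selection; they are omitted here to keep the sketch focused on how the penalty weight and acceptance temperature are scheduled over generations.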
