Abstract

Multi-objective optimization problems (MOPs) arise naturally in diverse areas of knowledge. Multi-objective evolutionary algorithms (MOEAs) have been applied successfully to this type of optimization problem over the last two decades. However, MOEAs still require considerable computational resources to obtain acceptable Pareto set/front approximations. Moreover, when the search space is highly constrained, MOEAs may have trouble approximating the solution set at all. When dealing with constrained MOPs (CMOPs), MOEAs usually resort to penalization methods. One way to overcome these difficulties is to hybridize MOEAs with local search operators. If the local search operator is based on classical mathematical programming, however, it relies on gradient information, which leads to a relatively high computational cost. In this work, we give an overview of our recently proposed constraint handling methods and their corresponding hybrid algorithms. These methods use dedicated mechanisms that handle the constraints more effectively without increasing the computational cost. Neither method computes gradients explicitly; instead, both extract this information from the current population of the MOEA. We conjecture that these techniques will allow for the fast and reliable treatment of CMOPs in the near future. Numerical results indicate that these ideas already yield competitive results in many cases.
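To make the gradient-free idea concrete, the following is a minimal sketch, not the authors' actual method, of how objective-space descent information can be extracted from neighboring population members instead of explicit gradients. For neighbors x_i of a point x0, the differences f(x_i) - f(x0) approximate J (x_i - x0), where J is the (unknown) Jacobian of the objective map; a decision-space direction v = Σ λ_i (x_i - x0) that realizes a desired objective-space direction d can then be found by a small least-squares problem, using only function values the population already provides. All names and the setup below are illustrative assumptions.

```python
import numpy as np

def approx_direction(x0, f0, neighbors, fvals, d):
    """Sketch of population-based gradient extraction (illustrative, not
    the paper's exact method): find v = X @ lam in decision space whose
    linearized objective change F @ lam best matches the desired
    objective-space direction d, where the columns of X are x_i - x0 and
    the columns of F are f(x_i) - f(x0). No explicit gradients are used.
    """
    X = np.column_stack([xi - x0 for xi in neighbors])   # n x r offsets
    F = np.column_stack([fi - f0 for fi in fvals])       # k x r objective deltas
    # Solve F @ lam ~ d in the least-squares sense, then map back.
    lam, *_ = np.linalg.lstsq(F, d, rcond=None)
    return X @ lam

# Toy check with linear objectives f(x) = A @ x, where the
# linearization is exact, so A @ v should reproduce d.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))                  # two objectives, three variables
x0 = np.zeros(3)
neighbors = [x0 + 0.1 * rng.normal(size=3) for _ in range(3)]
v = approx_direction(x0, A @ x0, neighbors, [A @ xi for xi in neighbors],
                     d=np.array([-1.0, -1.0]))
```

For nonlinear objectives the match is only first-order accurate, so v would be used as a local search direction with a small step size inside the hybrid MOEA; the point of the sketch is that the required information comes for free from individuals the population already evaluated.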
