Abstract

Selecting a proper set of test problems is essential for the fair performance comparison of evolutionary multi-objective optimization (EMO) algorithms, since comparison results depend strongly on the choice of test problems. Test problems are also important for examining the behavior of each algorithm. In general, it is advisable to prepare a varied set of test problems that includes both easy and difficult ones for each algorithm. Our idea is to use a meta-optimization technique to generate such a set. More specifically, we use a two-level meta-optimization model. In the upper level, test problems are optimized; that is, test problems are handled as solutions. In the lower level, each test problem is evaluated by running multiple EMO algorithms on it. The key advantage of this approach is the high flexibility in defining the upper-level objective function. For example, to design a test problem that is difficult only for a particular EMO algorithm, the minimization of that algorithm's relative performance can be used as the objective function. By maximizing its relative performance instead, we can design a test problem that is easy only for that algorithm. By generating both easy and difficult problems for each algorithm in this manner, we can prepare an appropriate test problem set for fair performance comparison. Through computational experiments, we demonstrate that a wide variety of test problems can be generated, each of which is difficult for a different type of EMO algorithm.
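
To make the two-level model concrete, the following is a minimal Python sketch, not code from the paper. It assumes hypothetical helpers make_problem(params), which builds a parameterized multi-objective test problem, and run_emo(algorithm, problem), which returns a quality score such as mean hypervolume over several runs; the algorithm names are illustrative. The upper level evolves problem parameters, and the lower level scores a candidate problem by the target algorithm's performance relative to the other algorithms.

```python
import random

ALGORITHMS = ["NSGA-II", "MOEA/D", "SMS-EMOA"]  # illustrative EMO algorithms

def relative_performance(params, target, make_problem, run_emo):
    """Lower level: evaluate one candidate test problem.

    Every EMO algorithm solves the candidate problem, and the target
    algorithm's score is compared with the average score of the others.
    """
    problem = make_problem(params)  # hypothetical problem builder
    scores = {alg: run_emo(alg, problem) for alg in ALGORITHMS}  # hypothetical runner
    others = [s for alg, s in scores.items() if alg != target]
    return scores[target] - sum(others) / len(others)

def design_problem(target, make_problem, run_emo, dim=10,
                   pop_size=20, generations=50, difficult=True):
    """Upper level: simple evolutionary search over problem parameters.

    Minimizing the target's relative performance yields a problem that is
    difficult specifically for `target`; maximizing yields an easy one.
    """
    sign = -1.0 if difficult else 1.0  # turn both cases into maximization

    def fitness(params):
        return sign * relative_performance(params, target, make_problem, run_emo)

    population = [[random.uniform(-1.0, 1.0) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)     # best problems first
        parents = population[:pop_size // 2]           # truncation selection
        children = [[x + random.gauss(0.0, 0.1) for x in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # Gaussian mutation
        population = parents + children
    return max(population, key=fitness)  # parameters of the best problem found
```

Under these assumptions, calling design_problem for each algorithm with difficult=True and again with difficult=False would assemble the kind of balanced problem set the abstract advocates for fair comparison.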
