Abstract

In recent years, a number of benchmarks have been developed to characterize dynamic optimization problems (DOPs), which consist of a series of static problems over time. In these benchmarks, the solutions found for the static problem in a previous environment are assumed to be fully implemented, so that the static problems in future environments are independent of how the solutions in the previous environment were implemented. Nevertheless, there is a wide range of real-world DOPs in which the problems in future environments are considerably influenced by the components of solutions that were not yet implemented in previous environments, because optimization of the problem in each environment proceeds while the solutions are being implemented, until the end of a working day or makespan. This type of DOP can be termed an online DOP (OL-DOP). To compensate for the lack of a systematic OL-DOP test suite, in this study we propose a benchmark generator for online dynamic single-objective and multi-objective optimization problems. Specifically, the ways in which the solutions found in one environment influence the problem in the next environment can be adjusted through different types of functions, and the degree of dynamism can be tuned by a set of predefined parameters in these functions. Based on the proposed generator, we suggest a test suite consisting of ten continuous OL-DOPs and two discrete OL-DOPs. The empirical results demonstrate that the suggested OL-DOP test suite is characterized by time-deception in comparison with existing DOP benchmark test suites, and enables analysis of the ability of dynamic optimization algorithms to handle the influence of the solutions found in each environment on the problem in the succeeding environment.
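The core idea, that solutions only partially implemented in one environment shape the problem faced in the next, can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the proposed generator: a sphere landscape whose optimum shift in the next environment is a function of the solution components implemented in the current one, with a severity parameter standing in for the dynamism degree; all function and parameter names are assumptions introduced here for illustration.

```python
import numpy as np

# Illustrative sketch of an online DOP (OL-DOP): the objective in
# environment t+1 depends on the part of the solution implemented in
# environment t. Names are hypothetical, not the paper's generator.

def sphere(x, shift):
    """Static sphere objective with an environment-dependent optimum shift."""
    return np.sum((x - shift) ** 2)

def next_shift(prev_shift, implemented_part, severity=0.5):
    """Carry the influence of the implemented solution components into the
    next environment; `severity` plays the role of a dynamism parameter."""
    return prev_shift + severity * implemented_part

# Simulate a few environments of an OL-DOP.
rng = np.random.default_rng(0)
dim, n_env = 5, 4
shift = np.zeros(dim)
for t in range(n_env):
    # Placeholder "optimizer": sample random candidates and keep the best.
    candidates = rng.uniform(-5, 5, size=(100, dim))
    best = min(candidates, key=lambda x: sphere(x, shift))
    # Only part of the best solution is implemented before the environment changes.
    implemented = best * rng.uniform(0, 1, size=dim)
    print(f"env {t}: f(best) = {sphere(best, shift):.3f}")
    shift = next_shift(shift, implemented)
```

In this sketch the next landscape is deliberately a function of what was actually implemented, so an algorithm that only tracks the current optimum can be misled, which is the time-deceptive behavior the abstract refers to.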
