Abstract

In evolutionary computation, dynamic multiobjective optimization has attracted increasing attention. Artificial benchmarks are effective tools for evaluating the performance of dynamic multiobjective evolutionary algorithms (DMOEAs). After reviewing existing benchmarks and highlighting their weaknesses, this paper proposes a new benchmark suite to promote more comprehensive testing of algorithms. The proposed suite contains eight random instances whose randomness is produced by specially designed random time sequences. The suite also introduces challenging but rarely considered characteristics, including diverse fitness-landscape features (e.g., deception, multimodality, and bias) and complex trade-off geometries (e.g., mixed convex-concave geometry and disconnected geometry). Empirical studies show that the proposed benchmark poses reasonable challenges to DMOEAs in terms of both convergence and diversity. In addition, a center matching strategy (CMS) is proposed to track the random changes in these problems; it exploits historical individual information in a global scope for population prediction. Compared with other reaction strategies, CMS proves highly competitive in handling random problems.
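To make the idea behind a center-based prediction strategy concrete, the sketch below shows one plausible reading of the abstract's description: population centers from past environments are kept in a global archive, the current center is matched against that archive, and the step that followed the matched center is reused to move the population after a change. This is a minimal illustration under stated assumptions, not the paper's actual CMS; the function names (`predict_population`, `center`), the nearest-center matching rule, and the Gaussian perturbation are all choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def center(pop):
    """Centroid of a population, shape (N, n) -> (n,)."""
    return pop.mean(axis=0)

def predict_population(pop, center_history, sigma=0.05):
    """Center-based prediction after an environment change (illustrative).

    center_history: list of population centers from past environments,
    kept as a global archive. The current center is matched against the
    archive to find the most similar past center; the step from that
    matched center to its successor is reused as a movement estimate.
    This is an assumed reconstruction, not the paper's CMS.
    """
    c_now = center(pop)
    if len(center_history) < 2:
        # Not enough history yet: no movement estimate is available.
        step = np.zeros_like(c_now)
    else:
        # Global matching: find the archived center closest to the
        # current one (excluding the last entry, which has no successor).
        past = np.asarray(center_history[:-1])
        idx = np.argmin(np.linalg.norm(past - c_now, axis=1))
        # Reuse the step that followed the matched center.
        step = center_history[idx + 1] - center_history[idx]
    # Shift the whole population by the estimated step, adding Gaussian
    # noise to preserve diversity around the predicted region.
    noise = rng.normal(0.0, sigma, size=pop.shape)
    return np.clip(pop + step + noise, 0.0, 1.0)

# Toy usage: a population in [0, 1]^2 reacting to a sequence of changes.
pop = rng.random((20, 2))
history = [center(pop)]
for t in range(5):
    pop = predict_population(pop, history)
    history.append(center(pop))
    print(f"change {t}: predicted center = {center(pop).round(3)}")
```

The matching step is what distinguishes this family of strategies from purely local prediction: because the archive is searched globally, the reused step can come from any earlier environment that resembles the current one, not only the most recent change.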
