Abstract

Properly configuring Evolutionary Algorithms (EAs) is a challenging task, complicated by the many factors that affect EAs’ performance, such as the properties of the fitness function, time and computational constraints, and many others. EAs’ meta-optimization methods, in which a metaheuristic is used to tune the parameters of another (lower-level) metaheuristic that optimizes a given target function, most often rely on the optimization of a single property of the lower-level method. In this paper, we show that using a multi-objective genetic algorithm to tune an EA makes it possible not only to find good parameter sets with respect to several objectives at the same time, but also to derive generalizable results which can provide guidelines for designing EA-based applications. In particular, we present a general framework for multi-objective meta-optimization, and show that “going multi-objective” allows one to generate configurations that, besides optimally fitting an EA to a given problem, also perform well on previously unseen ones.

Highlights

  • This paper investigates the tuning of Evolutionary Algorithms (EAs) from a multi-objective perspective

  • The importance of parameter tuning has been frequently addressed in recent years, both in theoretical and review papers such as [12] and in papers that provide extensive experimental evidence and a critical assessment of such methods

  • If we term t the average time needed for a single run of the Lower-Level EA (LL-EA) and N the total number of LL-EA runs, the upper bound for the time T needed for the whole process is T = t · N
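Reading the bound as the product of the per-run time t and the total number N of LL-EA runs, a small worked example follows. The concrete figures for t and N are hypothetical, chosen only to illustrate the arithmetic, not taken from the paper:

```python
# Hypothetical figures, for illustration only (not taken from the paper).
t = 2.5          # average wall-clock seconds per LL-EA run (assumed)
N = 1200         # total number of LL-EA runs in the meta-optimization (assumed)
T_upper = t * N  # upper bound on the time for the whole process, in seconds

print(T_upper)       # 3000.0 seconds
print(T_upper / 60)  # 50.0 minutes
```

The bound is an upper bound because LL-EA runs may terminate early (e.g., on convergence), so the actual time can be shorter.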


Summary

Introduction

This paper investigates the tuning of Evolutionary Algorithms (EAs) from a multi-objective perspective. It can be argued that, if the application of a Meta-EA can effectively lead to solutions closer to the global optimum of the problem at hand than those found with a standard setting of the algorithm being tuned, then, even supposing one uses several optimization meta-levels, the improvement margin of each higher-level Meta-EA becomes smaller and smaller with the level. This intuitively implies that the variability of the results with respect to the higher-level Meta-EAs’ parameter settings also shrinks with the level. EMOPaT is aimed at finding parameter sets that achieve good results considering the nature of the problems, the quality indices and, more generally, the conditions under which the EA is tuned. In a separate appendix, we demonstrate that EMOPaT can be considered an extension of SEPaT with equivalent performance on single-objective problems, and we assess its correct behavior in some controlled situations, in which it is shown to perform tuning as expected.
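The two-level scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper’s implementation: the lower level is a bare-bones Differential Evolution (DE/rand/1/bin) run with candidate parameters, and the meta level, which in EMOPaT is NSGA-II, is replaced here for brevity by random sampling of configurations followed by a non-dominated filter over two objectives (solution quality and computational cost). All function names and parameter ranges are assumptions made for the sketch:

```python
import random

def sphere(x):
    """Target function for the lower-level EA (to be minimized)."""
    return sum(v * v for v in x)

def run_de(f_weight, cr, generations, dim=5, pop_size=20, seed=0):
    """Lower-level EA: a minimal DE/rand/1/bin run with the given parameters.

    Returns (best fitness found, number of fitness evaluations used).
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(ind) for ind in pop]
    evals = pop_size
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[a][d] + f_weight * (pop[b][d] - pop[c][d])
                     if rng.random() < cr else pop[i][d]
                     for d in range(dim)]
            tf = sphere(trial)
            evals += 1
            if tf < fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, tf
    return min(fit), evals

def pareto_front(points):
    """Non-dominated filter over 2-tuples (both objectives minimized).

    Stands in for NSGA-II's non-dominated sorting step.
    """
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

if __name__ == "__main__":
    rng = random.Random(42)
    # Meta level: sample candidate (F, CR, generations) configurations.
    configs = [(rng.uniform(0.3, 1.0), rng.uniform(0.1, 1.0),
                rng.choice([10, 30, 60])) for _ in range(12)]
    scored = [run_de(*cfg) + (cfg,) for cfg in configs]
    # Two objectives: solution quality vs. computational cost (evaluations).
    front = pareto_front([(best, evals) for best, evals, _ in scored])
    for best, evals in sorted(front):
        print(f"best fitness {best:.3g} with {evals} evaluations")
```

The resulting front exposes the trade-off the paper exploits: cheap configurations with worse final fitness coexist with expensive, higher-quality ones, and a designer can pick a configuration according to the constraints at hand instead of optimizing a single property of the lower-level method.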

Differential Evolution
Particle Swarm Optimization
NSGA-II
Related Work
Experimental Evaluation
Multi-Objective Single-Function Optimization Under Different Constraints
Method
Multi-Function Optimization
Summary and Future Work
