Abstract

Evolutionary algorithms (EAs) are a family of nature-inspired algorithms widely used for solving complex optimization problems. Since the operators (e.g., crossover, mutation, selection) in most traditional EAs are developed on the basis of fixed heuristic rules or strategies, they are unable to learn the structures or properties of the problems being optimized. To equip EAs with learning abilities, various model-based evolutionary algorithms (MBEAs) have recently been proposed. This survey briefly reviews some representative MBEAs by considering three different motivations for using models. First, the most common motivation for using models is to estimate the distribution of the candidate solutions. Second, in evolutionary multi-objective optimization, one motivation for using models is to build inverse models from the objective space to the decision space. Third, when solving computationally expensive problems, models can be used as surrogates of the fitness functions. Based on the review, some further discussions are also given.

Highlights

  • If there is only one objective, i.e., m = 1, the problems are often known as single-objective optimization problems (SOPs), while if there is more than one objective function, i.e., m > 1, the problems are often known as multi-objective optimization problems (MOPs) [2] (the general formulation is sketched after this list)

  • Despite their various technical details, we find three main motivations for using ML models in evolutionary algorithms (EAs): (1) building estimation models in the decision space, (2) building inverse models that map from the objective space to the decision space, and (3) building surrogate models for the fitness functions

  • In the Bayesian multi-objective optimization algorithm (BMOA) [29], the selection operator is based on an ε-archive, where a minimal set of candidate solutions that ε-dominates all the others is maintained over generations (a minimal sketch of the ε-dominance test follows this list)
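For reference, the SOP/MOP distinction in the first highlight rests on the standard formulation below; this is textbook notation, not an equation quoted from the survey:

```latex
% General multi-objective optimization problem (m = 1 reduces to an SOP)
\min_{\mathbf{x} \in \Omega} \quad
\mathbf{F}(\mathbf{x}) = \big( f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x}) \big)^{\mathsf{T}}
```

Here x is a decision vector from the decision space Ω, and F maps it to an m-dimensional objective vector.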
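The ε-archive in the last highlight can be made concrete with a short sketch. The snippet below implements one common additive definition of ε-dominance for minimization, together with a naive archive update; the exact maintenance rules in BMOA [29] differ in detail, and all names here are illustrative:

```python
import numpy as np

def eps_dominates(a, b, eps):
    # Additive epsilon-dominance for minimization: every component of
    # a - eps is <= the matching component of b, at least one strictly.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a - eps <= b) and np.any(a - eps < b))

def update_archive(archive, candidate, eps):
    # Naive minimal-set maintenance: reject the candidate if it is already
    # epsilon-dominated; otherwise insert it and drop the members it
    # epsilon-dominates.
    if any(eps_dominates(a, candidate, eps) for a in archive):
        return archive
    return [a for a in archive if not eps_dominates(candidate, a, eps)] + [candidate]

archive = []
for point in [(1.0, 2.0), (1.05, 2.5), (0.5, 3.0)]:
    archive = update_archive(archive, point, eps=0.1)
print(archive)  # [(1.0, 2.0), (0.5, 3.0)]
```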

Summary

Introduction

The univariate marginal distribution algorithm (UMDA) maintains a probability vector p = (p1, p2, …, pn), where pi indicates the probability of having a 1 at position i of a candidate solution. Another representative univariate EDA is the population-based incremental learning (PBIL) algorithm [20], which uses a binary-encoded probability model similar to that in UMDA. While the conditional probability model as given in (3) is only capable of representing pair-wise interactions, some problems may contain more complicated interactions between the decision variables. To model such complicated interactions, a classic approach is the Bayesian optimization algorithm (BOA) [23], which adopts Bayesian networks as the multivariate models.

In the Bayesian multi-objective optimization algorithm (BMOA) [29], the selection operator is based on an ε-archive, where a minimal set of candidate solutions that ε-dominates all the others is maintained over generations.

Since the Pareto set (PS) is a piecewise continuous manifold under the Karush–Kuhn–Tucker optimality conditions (also known as the regularity property) [34], the regularity model-based multi-objective estimation of distribution algorithm (RM-MEDA) reduces the dimensionality of the decision vectors using the local PCA method and samples new candidate solutions in the latent space.
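To make the univariate model concrete, the following is a minimal UMDA-style loop on the OneMax toy problem; the fitness function, population sizes, and probability clamp are illustrative assumptions rather than details from the survey:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def onemax(x):
    # Toy fitness (illustrative assumption): number of ones in the bit string.
    return int(x.sum())

def umda(n_bits=20, pop_size=100, n_parents=50, n_gens=40):
    # Univariate model: p[i] is the probability of a 1 at position i.
    p = np.full(n_bits, 0.5)
    for _ in range(n_gens):
        # Sample a whole population from the current univariate model.
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        # Truncation selection: keep the n_parents best solutions.
        fitness = np.array([onemax(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-n_parents:]]
        # Re-estimate the marginal probability of a 1 at each position.
        p = parents.mean(axis=0)
        # Clamp the margins, a common safeguard against premature convergence.
        p = p.clip(1.0 / n_bits, 1.0 - 1.0 / n_bits)
    return p

print(np.round(umda(), 2))  # per-bit probabilities should approach 1.0
```

A PBIL-style variant [20] would blend rather than replace the estimate, e.g. p = (1 - lr) * p + lr * parents.mean(axis=0) for a small learning rate lr.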
