Abstract

Estimation of Distribution Algorithms (EDAs) (Mühlenbein et al., 1996; Mühlenbein & Paaß, 1996) are a promising area of research in evolutionary computation. EDAs build explicit models that capture the dependencies among the decision variables. The widely known Genetic Algorithm could benefit from such dependencies when the building blocks of the solution are correlated; however, it has been shown that genetic algorithms have a limited capacity for discovering and exploiting complex relationships (correlations) among variables. EDAs, instead, focus on learning probability distributions, which serve as the vehicle for capturing both the dependencies in the data and their structure. In order to show how the proposed method unifies the theory for an infinite-sized population with the finite-sized population case of practical EDAs, we first explain both. An EDA with an infinite-sized population would perform the steps shown in the algorithm in Table 1.
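To make the generic EDA loop concrete (sample a population from a probability model, select the best individuals, re-estimate the model), the following is a minimal UMDA-style sketch in Python. It is not the paper's proposed method: it uses an independent-bit (univariate) model for brevity, whereas the models discussed here are meant to capture dependencies among variables, and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def umda(fitness, n_bits, pop_size=100, select_frac=0.5, generations=50, seed=None):
    """Univariate EDA (UMDA-style): learn independent bit marginals from the
    selected individuals and sample the next population from them."""
    rng = np.random.default_rng(seed)
    probs = np.full(n_bits, 0.5)              # initial marginals P(x_i = 1)
    n_select = int(pop_size * select_frac)
    for _ in range(generations):
        # Sample a population from the current probability model.
        pop = (rng.random((pop_size, n_bits)) < probs).astype(int)
        # Evaluate and keep the best fraction (truncation selection).
        scores = np.apply_along_axis(fitness, 1, pop)
        selected = pop[np.argsort(scores)[::-1][:n_select]]
        # Re-estimate the marginal probabilities from the selected set.
        probs = selected.mean(axis=0)
        # Clamp to avoid premature convergence of marginals to 0 or 1.
        probs = np.clip(probs, 0.05, 0.95)
    return probs

# Usage example: maximize the number of ones (OneMax).
final_probs = umda(fitness=lambda x: x.sum(), n_bits=20, seed=0)
print(final_probs.round(2))
```

A finite population estimates the marginals from a sample, so they fluctuate around the values an infinite-population analysis would predict; that gap is what the unification of the two cases addresses.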
