Abstract
Since estimation of distribution algorithms (EDAs) were proposed, many attempts have been made to improve their performance in the context of global optimization. So far, studies and applications of multivariate probabilistic model-based EDAs in the continuous domain have mostly been restricted to low-dimensional problems. Traditional EDAs have difficulty solving higher-dimensional problems because of the curse of dimensionality and rapidly increasing computational cost. Nevertheless, scaling up continuous EDAs for large-scale optimization remains worthwhile, motivated by a distinctive feature of EDAs: because a probabilistic model is estimated explicitly, useful properties of the problem can be discovered from the learned model. Besides obtaining a good solution, an understanding of the problem structure can be of great benefit, especially for black-box optimization. We propose a novel EDA framework with model complexity control (EDA-MCC) to scale up continuous EDAs. By employing weakly dependent variable identification and subspace modeling, EDA-MCC shows significantly better performance than traditional EDAs on high-dimensional problems. Moreover, EDA-MCC reduces both the computational cost and the requirement for large population sizes. In addition to finding good solutions, EDA-MCC can also provide useful characterizations of the problem structure. EDA-MCC is the first successful instance of multivariate model-based EDAs that can be effectively applied to a general class of problems of up to 500 dimensions. It also outperforms some newly developed algorithms designed specifically for large-scale optimization. To understand the strengths and weaknesses of EDA-MCC, we have carried out extensive computational studies. The results reveal when EDA-MCC is likely to outperform other algorithms and on what kinds of benchmark functions.
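To make the two components named above concrete, the following is a minimal sketch (Python with NumPy) of one plausible way to carry out weakly dependent variable identification and subspace modeling on a set of selected solutions. The correlation threshold theta, the maximum subspace size c, and the function name partition_variables are illustrative assumptions, not the paper's exact procedure or parameter settings.

    import numpy as np

    def partition_variables(pop, theta=0.3, c=20, rng=None):
        # pop: (m, n) array of selected (promising) solutions.
        # theta: correlation threshold below which a variable is treated
        #        as weakly dependent (illustrative value).
        # c: maximum size of each subspace of strongly dependent variables.
        rng = np.random.default_rng() if rng is None else rng
        n = pop.shape[1]
        corr = np.corrcoef(pop, rowvar=False)   # (n, n) sample correlation matrix
        np.fill_diagonal(corr, 0.0)             # ignore self-correlation
        # Weakly dependent: strongest absolute correlation with any other
        # variable stays below theta; each such variable can be modeled by
        # a cheap univariate Gaussian.
        weak = np.where(np.max(np.abs(corr), axis=1) < theta)[0]
        strong = np.setdiff1d(np.arange(n), weak)
        # Subspace modeling: randomly partition the remaining (strongly
        # dependent) variables into groups of at most c variables; each
        # group would get its own multivariate Gaussian model.
        strong = rng.permutation(strong)
        subspaces = [strong[i:i + c] for i in range(0, len(strong), c)]
        return weak, subspaces

The point of such a split is model complexity control: only small covariance matrices (at most c-by-c) ever need to be estimated, instead of one full n-by-n covariance matrix.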
Highlights
Estimation of distribution algorithms (EDAs) [1], [2] have been intensively studied in the context of global optimization.
Previous results (e.g., [6]) have shown that (a) Gaussian models suffer less from the curse of dimensionality than histogram models, which is reasonable because Gaussian models usually have far fewer degrees of freedom, and (b) single Gaussian models have fewer degrees of freedom than Gaussian mixture models; we therefore focus on single multivariate Gaussian models to scale up EDAs (a rough parameter-count comparison is sketched after these highlights).
In this paper we first analyze the difficulties of continuous EDAs in high-dimensional search spaces.
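As a rough back-of-the-envelope check of the degrees-of-freedom argument above, the snippet below counts the free parameters of a full-covariance Gaussian (or Gaussian mixture) versus a univariate-Gaussian-per-variable model; the dimensionalities and the 5-component mixture are illustrative choices, not figures from the paper.

    def gaussian_param_counts(n, k=1):
        # k-component Gaussian mixture in n dimensions with full covariance
        # matrices: k * (n means + n(n+1)/2 covariance entries) + (k - 1) weights.
        full = k * (n + n * (n + 1) // 2) + (k - 1)
        diagonal = 2 * n   # one mean and one variance per variable
        return full, diagonal

    for n in (100, 500):
        full, diag = gaussian_param_counts(n)
        mix, _ = gaussian_param_counts(n, k=5)
        print(f"n={n}: full Gaussian {full}, 5-component mixture {mix}, "
              f"univariate model {diag}")

At n = 500, a single full-covariance Gaussian already has over 125,000 free parameters, which is what makes reliable estimation from a modest population so difficult.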
Summary
Estimation of distribution algorithms (EDAs) [1], [2] have been intensively studied in the context of global optimization. An EDA explicitly builds a probabilistic model of promising solutions in the search space. New solutions are sampled from the model, which encodes global statistical information extracted from the search space, so the model guides reproduction toward better solutions. In traditional EAs, the underlying model is usually expressed only implicitly through the evolutionary operators; once the model is represented explicitly, the algorithm can be classified as an instance of an EDA. Research on EDAs has been extended from the discrete domain to continuous optimization, and much progress has been made. We focus on EDAs for single-objective continuous optimization.
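A minimal sketch of this build-model/sample loop with a single multivariate Gaussian model is shown below (Python with NumPy). The sphere objective, the population size, the truncation-selection ratio, and the small diagonal term added to the covariance for numerical stability are illustrative choices, not the settings used in the paper.

    import numpy as np

    def sphere(x):
        return np.sum(x ** 2, axis=1)       # toy objective, minimized at 0

    def gaussian_eda(obj, n=10, pop_size=200, sel_ratio=0.3,
                     n_gen=100, bounds=(-5.0, 5.0), seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(*bounds, size=(pop_size, n))
        for _ in range(n_gen):
            fitness = obj(pop)
            # Truncation selection: keep the most promising solutions.
            elite = pop[np.argsort(fitness)[: int(sel_ratio * pop_size)]]
            # Model building: estimate a multivariate Gaussian from them.
            mean = elite.mean(axis=0)
            cov = np.cov(elite, rowvar=False) + 1e-8 * np.eye(n)
            # Sampling: the explicit model guides reproduction of the
            # next population.
            pop = rng.multivariate_normal(mean, cov, size=pop_size)
            pop = np.clip(pop, *bounds)
        best = pop[np.argmin(obj(pop))]
        return best, obj(best[None, :])[0]

In a traditional EA the same search bias would be produced implicitly by crossover and mutation; here the Gaussian mean and covariance make it explicit and inspectable.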