Abstract
Decomposing a large-scale problem into small-scale subproblems and optimizing them cooperatively are critical steps for solving large-scale optimization problems. This article proposes a cooperative differential evolution with utility-based adaptive grouping. Problem decomposition is executed adaptively by two mechanisms, a circular sliding controller and a relation matrix, which consider the variable interactions on the basis of short-term and long-term utilities, respectively. The circular sliding controller provides baselines for the subproblem optimizer. The size of the sliding window and the sliding speed in the controller are adjusted adaptively so that the variables with higher activeness are optimized more extensively. The relation matrix–based grouping strategy enables interacting variables to be grouped into the same subproblem with higher probability. The novelty is that decomposition is conducted alongside the optimization process, without extra computational burden. For subproblem optimization, we use a self-adaptive differential evolution operator that adaptively adjusts its parameters to guide the search toward the optimal solutions of the subproblems. Experiments on the CEC2008 and CEC2010 benchmarks and on practical problems show the effectiveness of the proposed algorithm.
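To make the relation matrix–based grouping concrete, the following is a minimal sketch of one plausible realization: variables are partitioned into groups, each group seeded with an ungrouped variable and filled by roulette-wheel selection weighted by relation-matrix entries, so strongly related variables tend to land in the same subproblem. The function name, the seeding rule, and the weighting are illustrative assumptions, not the paper's exact procedure.

```python
import random

def relation_based_grouping(R, group_size, rng=random):
    """Partition variables 0..D-1 into groups of at most group_size.

    R[i][j] is an assumed interaction strength between variables i and j.
    Each group grows by roulette-wheel selection: an ungrouped variable is
    picked with probability proportional to its total relation to the
    current group members (illustrative sketch, not the authors' rule).
    """
    D = len(R)
    ungrouped = set(range(D))
    groups = []
    while ungrouped:
        seed = rng.choice(sorted(ungrouped))      # seed a new group
        group = [seed]
        ungrouped.discard(seed)
        while ungrouped and len(group) < group_size:
            cands = sorted(ungrouped)
            # weight each candidate by its relation to the current group
            weights = [max(sum(R[g][c] for g in group), 1e-12) for c in cands]
            pick = rng.choices(cands, weights=weights)[0]
            group.append(pick)
            ungrouped.discard(pick)
        groups.append(group)
    return groups
```

With a uniform relation matrix this degenerates to random grouping; the benefit appears only once the matrix has accumulated interaction evidence during optimization.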
Highlights
High-dimensional optimization problems can be found in many engineering fields, so effective and efficient optimization algorithms are always in demand.[1,2] Evolutionary computation has emerged as an intelligent computing discipline for optimization problems; it contains a large number of heuristics that have strong robustness and global search ability and do not require domain knowledge.
For large-scale optimization problems especially, it is essential to decompose the problem into small-scale subproblems and to optimize the subproblems cooperatively.
The circular sliding window slides across different regions of the variable blocks and provides baselines for the subproblem optimizer.
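The circular sliding described above can be sketched as a generator that yields index subsets, wrapping around at the end of the variable vector. The parameter names (`D`, `window_size`, `step`) are illustrative; in the paper, the window size and sliding speed are adapted online rather than fixed.

```python
def circular_window(D, window_size, step, start=0):
    """Yield subsets of variable indices as a window of the given size
    slides circularly over D variables, advancing by `step` each time.
    A minimal fixed-parameter sketch; the paper adapts size and speed."""
    pos = start
    while True:
        # wrap indices modulo D so the window circles back to the start
        yield [(pos + i) % D for i in range(window_size)]
        pos = (pos + step) % D
```

For example, with `D=10`, `window_size=4`, and `step=3`, successive windows are `[0,1,2,3]`, `[3,4,5,6]`, `[6,7,8,9]`, then the wrapped `[9,0,1,2]`.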
Summary
Since different subsets of the variables have different surface landscapes and require different computational efforts, the size of the window and the sliding speed are self-adapted. Active and regular regions imply that the variables are likely to interact, so the sliding speed is slow when the window covers them. Inactive regions imply that the variables do not interact, so the sliding speed is fast when the window covers them. So that such variables can still be optimized together with latent interaction partners, the permutation of the variables belonging to different inactive regions is rearranged each cycle. For optimization of the subproblems, that is, the subsets of variables covered by the window, a self-adaptive differential evolution (SaDE) operator is adopted. The ‘‘Experimental studies’’ section presents the experimental studies, and the ‘‘Conclusion’’ section concludes this article.
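As a rough illustration of the subproblem optimizer, here is one DE/rand/1/bin generation in which each trial draws its scale factor and crossover rate from distributions, in the spirit of self-adaptive DE. The sampling distributions, clamping bounds, and greedy selection shown are common SaDE-style choices, not necessarily the authors' exact parameter-memory scheme.

```python
import random

def sade_step(pop, fitness, objective, rng=random):
    """One DE/rand/1/bin generation with per-trial F and CR sampled from
    Gaussian distributions (illustrative self-adaptive DE sketch).
    pop: list of real-valued vectors; fitness: their objective values."""
    n, dim = len(pop), len(pop[0])
    new_pop, new_fit = [], []
    for i in range(n):
        # three distinct donors, none equal to the target index i
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        F = min(max(rng.gauss(0.5, 0.3), 0.1), 1.0)   # sampled scale factor
        CR = min(max(rng.gauss(0.5, 0.1), 0.0), 1.0)  # sampled crossover rate
        jrand = rng.randrange(dim)  # ensure at least one mutated component
        trial = [
            pop[a][j] + F * (pop[b][j] - pop[c][j])
            if (rng.random() < CR or j == jrand) else pop[i][j]
            for j in range(dim)
        ]
        ft = objective(trial)
        # greedy one-to-one selection: keep the better of target and trial
        if ft <= fitness[i]:
            new_pop.append(trial); new_fit.append(ft)
        else:
            new_pop.append(pop[i]); new_fit.append(fitness[i])
    return new_pop, new_fit
```

In the cooperative framework, `objective` would evaluate only the variable subset covered by the current window, with the remaining variables held at the context-vector values; the greedy selection guarantees the best fitness never worsens across generations.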