Abstract

Cooperative co-evolution (CC) is widely used to solve large-scale continuous optimization problems: a large-scale problem is divided into several small-scale sub-problems via a decomposition method, and each sub-problem is then optimized separately. However, the performance of CC depends heavily on the decomposition method. A recently proposed bisection-based decomposition method, recursive differential grouping (RDG), performs well on large-scale continuous optimization problems. To further improve on RDG, this paper develops a novel decomposition method called three-level recursive differential grouping (TRDG). In TRDG, when an interaction between two sets is detected, the variables in one of the sets are divided into three subsets by trichotomy, and the interaction between each subset and the other set is then detected. Compared with RDG, TRDG reduces the depth of recursion and thus saves fitness evaluations (FEs). In addition, we devise a novel strategy to adaptively update the threshold used to identify interactions between variables. Experimental results on the CEC’2010 and CEC’2013 benchmark functions show that TRDG outperforms several existing decomposition methods in terms of both decomposition accuracy and the number of FEs. Furthermore, TRDG is embedded into two frameworks to tackle the CEC’2010 large-scale continuous optimization problems.

Highlights

  • Large-scale continuous optimization problems (LSCOPs) are increasingly common in practical applications [1]–[3]

  • To further improve the performance of RDG, this paper proposes three-level recursive differential grouping (TRDG), a static decomposition method

  • TRDG reduces the depth of recursion and saves fitness evaluations (FEs)
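The interaction detection underlying RDG and TRDG is based on the differential grouping idea: two variables interact if shifting one of them changes the fitness effect of perturbing the other. The following is a minimal pairwise sketch of that check, not the authors' exact set-based routine; the function names, the fixed perturbation `delta`, and the fixed threshold `eps` are illustrative assumptions (TRDG instead updates its threshold adaptively).

```python
def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    """Differential-grouping style pairwise check (simplified sketch).

    x_i and x_j interact iff shifting x_j changes the fitness effect of
    perturbing x_i, i.e. |delta1 - delta2| exceeds a threshold.
    Costs four fitness evaluations per pair.
    """
    a = list(x)
    b = list(x)
    b[i] += delta
    delta1 = f(b) - f(a)        # effect of perturbing x_i alone

    a2 = list(a)
    a2[j] += delta
    b2 = list(b)
    b2[j] += delta
    delta2 = f(b2) - f(a2)      # same perturbation after shifting x_j

    return abs(delta1 - delta2) > eps


# Usage: a fully separable function shows no interaction,
# while a cross-product term x0*x1 does.
f_sep = lambda v: sum(t * t for t in v)
f_mix = lambda v: v[0] * v[1] + sum(t * t for t in v)
```

For a separable function the two differences are identical, so the check correctly reports no interaction regardless of the perturbation point.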


Summary

INTRODUCTION

Large-scale continuous optimization problems (LSCOPs) are increasingly common in practical applications [1]–[3]. In RDG, the variables are divided into two subsets of equal size; TRDG instead divides them into three subsets, which reduces the depth of recursion and saves FEs. We also design an adaptive threshold strategy to improve the decomposition accuracy. Experiments on two sets of large-scale global optimization benchmark functions verify that TRDG outperforms three compared decomposition methods. A detailed analysis shows that the computational complexity of both TRDG and RDG for decomposing a D-dimensional problem is O(D log(D)) in terms of the number of FEs, and that both methods consume about 3(D − m) FEs to detect the (D − m) separable variables. On the CEC’2010 benchmark functions, the performance of TRDG is compared with that of the RDG, DG2, and GDG methods.
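The recursive splitting described above can be sketched as follows. This is an illustrative simplification, not the published TRDG algorithm: a toy oracle (`make_oracle`) stands in for the fitness-based interaction test, and the splitting factor `k` makes the depth difference visible (`k=2` mimics RDG's bisection, `k=3` TRDG's trichotomy).

```python
import math

def make_oracle(linked):
    """Toy interaction oracle (assumption, replaces fitness-based detection):
    two index sets 'interact' iff each contains a member of the linked group."""
    linked = set(linked)
    def interact(s1, s2):
        return bool(linked & set(s1)) and bool(linked & set(s2))
    return interact

def find_partners(x1, x2, interact, k=3):
    """Recursively locate every variable in x2 that interacts with x1,
    splitting x2 into k roughly equal subsets per level."""
    if not interact(x1, x2):
        return []                       # prune the whole subset at once
    if len(x2) == 1:
        return list(x2)                 # a single interacting variable found
    step = math.ceil(len(x2) / k)       # k roughly equal chunks
    found = []
    for i in range(0, len(x2), step):
        found += find_partners(x1, x2[i:i + step], interact, k)
    return found


# Usage: variables 2, 5 and 7 form one interacting group;
# searching from variable 2 over indices 3..9 recovers its partners.
oracle = make_oracle([2, 5, 7])
partners = sorted(find_partners([2], list(range(3, 10)), oracle))
```

Splitting into thirds shrinks each subset faster than bisection (depth roughly log3 of the subset size rather than log2), which is the source of the FE savings the paper reports.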
