Abstract

Many state-of-the-art evolutionary algorithms (EAs) can be categorized as sequential hybrid EAs, in which several EAs are executed one after another. The time at which one EA switches to another is critical to the performance of the hybrid EA, because the switching time determines the allocation of computational resources and thereby helps balance exploration and exploitation. In this article, a framework for adaptive parameter control in hybrid EAs is proposed, in which the switching time is controlled by a learned agent rather than a manually designed scheme. First, the framework is applied to an adaptive differential evolution algorithm, LSHADE, to control when to apply its population reduction scheme. Then, the framework is applied to the winner of the CEC 2018 competition, the hybrid sampling evolution strategy (HSES), to control when to switch from the univariate sampling phase to the covariance matrix adaptation evolution strategy (CMA-ES) phase. The agents for parameter control in LSHADE and HSES are trained with Q-learning and deep Q-learning, respectively, yielding the learned algorithms Q-LSHADE and DQ-HSES. Experiments on the CEC 2014 and 2018 test suites show that the learned algorithms significantly outperform their counterparts as well as several state-of-the-art EAs.
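As a rough illustration of the idea described above, the sketch below shows a tabular Q-learning controller that decides, at each generation, whether to stay in the current EA phase or switch to the next one. The state encoding, actions, reward signal, and hyperparameters here are placeholder assumptions for illustration only; the paper's actual design is not specified in the abstract.

```python
import random
from collections import defaultdict

# Minimal sketch of a tabular Q-learning controller for phase switching.
# Assumed (hypothetical) design:
#   state  = a discretized pair (fraction of budget used, stagnation level)
#   action = 0 (keep current EA phase) or 1 (switch to the next phase)
#   reward = normalized fitness improvement observed after taking the action

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # illustrative hyperparameters
ACTIONS = (0, 1)

q_table = defaultdict(lambda: [0.0, 0.0])   # Q[state][action]

def choose_action(state):
    """Epsilon-greedy choice between staying in the phase and switching."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    values = q_table[state]
    return max(ACTIONS, key=lambda a: values[a])

def update(state, action, reward, next_state):
    """One-step Q-learning update of the switching policy."""
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                       - q_table[state][action])

# Example of how the controller would be queried inside a hybrid EA loop
# (the EA itself is omitted; the state values are made up for illustration):
state = (3, 1)                      # e.g., 30-40% of budget used, mild stagnation
action = choose_action(state)       # 1 means: switch to the next phase now
update(state, action, reward=0.05, next_state=(4, 0))
```

In the learned algorithms described in the abstract, an agent of this kind replaces a hand-designed switching rule; DQ-HSES presumably substitutes a neural network (deep Q-learning) for the table when the state space is too large to enumerate.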
