Abstract

Proven as an efficient population-based optimization algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) features two evolution paths, one to update the covariance matrix and the other to adapt its mutation strength. Motivated by the time and space complexity of CMA-ES, there have been several attempts in the literature to realize a single-path algorithm. However, such attempts require altering the original structure of CMA-ES, eliminating features crucial to overall algorithm performance. In this paper, we show that the two evolution paths of CMA-ES are highly correlated and that one can be expressed in terms of the other, reducing the computational cost of the algorithm while preserving the original algorithmic framework. In experimental studies conducted on 30 functions from the IEEE CEC 2014 benchmark suite, the proposed algorithm shows results comparable with the standard CMA-ES as well as five other state-of-the-art CMA-ES variants. Furthermore, it is shown that the proposed algorithm can be applied to policy search in Deep Reinforcement Learning (DRL). Performance results on selected DRL problems from different application domains demonstrate the efficiency of the proposed algorithm compared to other population-based algorithms often applied for policy search in DRL.
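For context, the two evolution paths referred to above are the standard CMA-ES updates: one path (for step-size adaptation) accumulates the *whitened* mean displacement, the other (for covariance adaptation) accumulates the raw displacement. A minimal sketch of these two updates, using the standard notation (this is the textbook formulation, not the paper's proposed single-path variant):

```python
import numpy as np

def update_paths(p_sigma, p_c, mean_shift, sigma, C, mu_eff,
                 c_sigma, c_c, h_sigma=1.0):
    """One iteration of the two standard CMA-ES evolution-path updates.

    mean_shift : m_new - m_old, the displacement of the distribution mean
    sigma      : current step size (mutation strength)
    C          : current covariance matrix
    mu_eff     : variance-effective selection mass
    c_sigma,
    c_c        : learning rates of the two paths
    h_sigma    : Heaviside stall indicator (1.0 when the update is active)
    """
    # C^{-1/2} via eigendecomposition of the (symmetric) covariance matrix
    eigvals, B = np.linalg.eigh(C)
    C_inv_sqrt = B @ np.diag(1.0 / np.sqrt(eigvals)) @ B.T

    y = mean_shift / sigma  # sigma-normalized mean displacement

    # Path for step-size adaptation: accumulates the whitened step C^{-1/2} y
    p_sigma = ((1 - c_sigma) * p_sigma
               + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * (C_inv_sqrt @ y))

    # Path for covariance adaptation: accumulates the raw step y
    p_c = ((1 - c_c) * p_c
           + h_sigma * np.sqrt(c_c * (2 - c_c) * mu_eff) * y)

    return p_sigma, p_c
```

The two updates share the same driving signal `y` and differ only in the learning rates and the `C^{-1/2}` whitening, which is the structural redundancy the abstract's correlation argument exploits.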
