Abstract

Evolving the weights of neural networks through evolutionary computation (neuroevolution) has proven scalable over a range of challenging Reinforcement Learning (RL) control tasks. However, as in most black-box optimization settings, existing neuroevolution approaches require an additional adaptation process to balance exploration and exploitation effectively by tuning sensitive hyper-parameters throughout evolution. Consequently, these methods are often burdened by the computational complexity of such adaptation processes, which typically rely on a number of elaborately formulated strategy parameters. In this paper, an Evolution Strategy (ES) with a simple yet efficient ensemble of mutation strategies is proposed. Specifically, two distinct mutation strategies coexist throughout the evolution process, each associated with its own population subset. The elites used to generate the offspring population are then selected by jointly evaluating the combined population. Experiments on a testbed of six (6) black-box optimization problems, generated from a classical control problem and six (6) established continuous RL agents, demonstrate that the proposed method converges faster and scales better than the canonical ES. Furthermore, the proposed Adaptive Ensemble ES (AEES) achieves, on average, 5-10000x better sample complexity on low-dimensional problems and 10-100x better sample complexity on high-dimensional problems than the associated base DRL agents.
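To make the ensemble idea concrete, the following is a minimal sketch (not the authors' implementation) of one generation of an ensemble ES: two mutation strategies, here Gaussian noise with a small and a large step size (assumed values), each produce their own population subset; the combined population is co-evaluated and the elites update the search distribution. The toy objective `sphere` and all hyper-parameters are placeholders for illustration; in the full AEES the balance between the strategies is presumably adapted during evolution, whereas this sketch keeps it fixed for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy black-box objective (minimization); stands in for an RL return."""
    return float(np.sum(x ** 2))

def generation(mean, objective, subpop_size=20, sigmas=(0.05, 0.5), n_elite=10):
    """One generation: two mutation strategies, joint evaluation, elite update."""
    dim = mean.shape[0]
    # Each mutation strategy (step size) generates its own subset of offspring.
    offspring = np.concatenate([
        mean + sigma * rng.standard_normal((subpop_size, dim))
        for sigma in sigmas
    ])
    # Co-evaluate the combined population and keep the best individuals (elites).
    fitness = np.array([objective(x) for x in offspring])
    elite_idx = np.argsort(fitness)[:n_elite]
    elites = offspring[elite_idx]
    # Move the search distribution toward the elites.
    return elites.mean(axis=0), float(fitness[elite_idx[0]])

mean = rng.standard_normal(10)
for g in range(50):
    mean, best = generation(mean, sphere)
print("best fitness after 50 generations:", best)
```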
