Abstract

To improve the effectiveness of surrogate-assisted evolutionary algorithms (SAEAs) in solving high-dimensional expensive optimization problems with multi-polar and multi-variable coupling properties, a new approach called DRBM-ASRL is proposed. This approach leverages restricted Boltzmann machines (RBMs) for feature learning and reinforcement learning for adaptive strategy selection. DRBM-ASRL integrates four search strategies built on three heterogeneous surrogate modeling approaches, each catering to a different search preference. Two of these strategies perform generative sampling in subspaces of varying dimensionality, while the other two explore the local and global landscapes of the high-dimensional source space. This enables a more effective tradeoff between exploration and exploitation in the solution space. Reinforcement learning is employed to adaptively prioritize the search strategies during optimization, based on online feedback from the current best solution. In addition, to enhance the representation of potentially optimal samples in the solution space, two task-driven RBMs are separately trained to construct a feature subspace and to reconstruct the features of the source space. DRBM-ASRL is evaluated on high-dimensional benchmarks ranging from 50 to 200 dimensions, 14 complex CEC 2013 benchmark problems with 100 dimensions, and a power system problem with 118 dimensions. Experimental results demonstrate its superior convergence performance and optimization efficiency compared to eight state-of-the-art SAEAs.
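To make the adaptive strategy selection concrete, below is a minimal, illustrative sketch (not the authors' implementation) of reinforcement-learning-style prioritization over four candidate search strategies, where the value estimate of each strategy is updated from the improvement of the best objective value it produces. All names (StrategySelector, select, update) and the epsilon-greedy scheme are assumptions for illustration only.

```python
import random

class StrategySelector:
    """Epsilon-greedy value estimates over a fixed set of search strategies."""

    def __init__(self, n_strategies=4, epsilon=0.2, alpha=0.3):
        self.q = [0.0] * n_strategies   # running value estimate per strategy
        self.epsilon = epsilon          # probability of exploring a random strategy
        self.alpha = alpha              # learning rate for value updates

    def select(self):
        # Explore with probability epsilon, otherwise pick the highest-valued strategy.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, strategy, old_best, new_best):
        # Reward is the non-negative improvement of the best-so-far objective
        # value (minimization) achieved by the chosen strategy this iteration.
        reward = max(0.0, old_best - new_best)
        self.q[strategy] += self.alpha * (reward - self.q[strategy])
```

In such a scheme, strategies that repeatedly improve the incumbent solution receive higher value estimates and are selected more often, while the exploration term keeps under-used strategies from being discarded prematurely.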
