Abstract

The performance of decomposition-based algorithms is sensitive to Pareto front shapes, since reference vectors preset in advance cannot always adapt to varied problem characteristics when no a priori knowledge is available. To address this issue, this article proposes an adaptive reference vector reinforcement learning (RVRL) approach for decomposition-based algorithms, applied to industrial copper burdening optimization. The proposed approach involves two main operations: 1) a reinforcement learning (RL) operation and 2) a reference point sampling operation. Because the states of the reference vectors frequently interact with the landscape environment, the RL operation treats reference vector adaptation as an RL task, in which each reference vector learns from environmental feedback and selects actions that gradually fit the problem characteristics. The reference point sampling operation then uses estimation-of-distribution learning models to sample new reference points. Finally, the resulting algorithm is applied to the proposed industrial copper burdening problem, where an adaptive penalty function and a soft-constraint-based relaxation approach handle the complex constraints. Experimental results on both benchmark problems and real-world instances verify the competitiveness and effectiveness of the proposed algorithm.
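To make the two operations concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): a tabular Q-learning update with epsilon-greedy action selection for each reference vector, and a simple Gaussian estimation-of-distribution model that samples new reference points on the unit simplex. The state labels, action set, and all parameter values are illustrative assumptions.

```python
import random

# Assumed toy state/action space for a single reference vector:
# states describe whether the vector recently produced improving solutions,
# actions decide whether to keep it or resample it from the learned model.
STATES = ["productive", "idle"]
ACTIONS = ["keep", "resample"]

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update (assumed learning rule, not
    # necessarily the one used in the article).
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

def choose_action(q, state, eps=0.2):
    # Epsilon-greedy selection: explore with probability eps, else exploit.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def sample_reference_points(promising, n, sigma=0.05):
    # Estimation-of-distribution style sampling (Gaussian model assumed):
    # fit a mean vector to the promising reference points, then draw
    # perturbed samples and normalize them back onto the simplex.
    dim = len(promising[0])
    mean = [sum(p[i] for p in promising) / len(promising) for i in range(dim)]
    samples = []
    for _ in range(n):
        v = [max(0.0, random.gauss(mean[i], sigma)) for i in range(dim)]
        s = sum(v) or 1.0
        samples.append([x / s for x in v])
    return samples
```

In this sketch the reward would come from the environmental feedback the abstract mentions, e.g., whether the subproblem attached to a reference vector improved in the last generation; the Q-table then biases each vector toward keeping or resampling itself.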

