A novel method, the Pareto Envelope Augmented with Reinforcement Learning (PEARL), has been developed to address the challenges posed by multi-objective problems, particularly in engineering, where the evaluation of candidate solutions can be time-consuming. PEARL distinguishes itself from traditional policy-based multi-objective Reinforcement Learning methods by learning a single policy, eliminating the need for multiple neural networks to independently solve simpler sub-problems. Several versions, inspired by deep learning and evolutionary techniques, have been crafted to cater to both unconstrained and constrained problem domains, with Curriculum Learning (CL) harnessed to manage constraints effectively. PEARL's performance is first evaluated on classical multi-objective benchmarks. It is then tested on two practical PWR core Loading Pattern (LP) optimization problems to showcase its real-world applicability. The first problem optimizes the Cycle length (LC) and the rod-integrated peaking factor (FΔh) as primary objectives, while the second incorporates the average enrichment as an additional objective. Furthermore, PEARL handles three types of constraints, related to the boron concentration (Cb), the peak pin burnup (Bumax), and the peak pin power (Fq). The results are systematically compared against conventional stochastic optimization approaches. Notably, the PEARL-NdS variant efficiently uncovers a Pareto front without requiring additional effort from the algorithm designer, in contrast to a single optimization with scaled objectives, and it outperforms the classical approach across multiple performance metrics, including the Hypervolume. Future work will include a sensitivity analysis of hyper-parameters, supported by statistical analysis, to optimize the application of PEARL and to extend it to more intricate problems.
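Two concepts from the abstract — the non-dominated (Pareto) front uncovered by the PEARL-NdS variant, and the Hypervolume metric used for comparison — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the brute-force 2-D approach are illustrative assumptions, assuming all objectives are minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (all objectives minimized).

    Illustrative only: a point p is dominated if some other point q is
    no worse on every objective and strictly better on at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front


def hypervolume_2d(points, ref):
    """2-D Hypervolume of `points` w.r.t. a reference point `ref` that is
    worse than every point on both objectives (minimization).

    Sweeps the front in ascending order of the first objective and sums
    the non-overlapping rectangles between consecutive front points.
    """
    pts = sorted(pareto_front(points))  # x ascending => y descending
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

A larger Hypervolume means the front dominates more of the objective space relative to the reference point, which is why it serves as a single scalar for comparing multi-objective optimizers.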