Abstract

Nowadays, economic and environmental concerns in power production have become increasingly significant. To address these issues, the Combined Economic and Emission Dispatch (CEED) problem has been introduced to optimize the power generation process by considering both fuel cost and emitted pollutants. However, due to the nonlinearity and nonconvexity of the objective function, the optimization of CEED remains challenging. In this paper, we develop a Reinforcement Learning-based Adaptive Differential Evolution (RLADE) algorithm to enhance optimization performance. The mutation strategy and crossover probability of RLADE are adapted using Reinforcement Learning (RL) to improve convergence speed and search ability, respectively. Additionally, two modifications to RL, namely adaptive population-size-based state division and a fitness-ranking-based reward mechanism, are proposed to improve the accuracy of state division and reward calculation. The experiments conducted in this paper consider two objective formulations of the CEED problem, namely quadratic and cubic criterion functions. The mean values and standard deviations of the obtained solutions are used to assess the performance of RLADE against comparative algorithms, namely the standard DE algorithm and two RL-based DE variants. The results demonstrate that RLADE surpasses its counterparts in 100%, 85.7%, and 100% of cases for the 6-unit quadratic, 11-unit quadratic, and cubic CEED problems, respectively, in terms of both search accuracy and convergence ability. Furthermore, the significance of RLADE's superiority is confirmed through the Wilcoxon signed-rank test.
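To make the setting concrete, the sketch below shows a common quadratic CEED objective (a weighted sum of fuel cost and emission, with illustrative coefficients that are not taken from the paper) and one generation of a baseline DE/rand/1/bin step. In RLADE the mutation strategy and crossover probability `CR` would be chosen adaptively by an RL agent each generation; here they are fixed constants, so this is only a minimal reference implementation of the underlying search, not the authors' algorithm.

```python
import numpy as np

def ceed_objective(P, cost_coef, emis_coef, w=0.5):
    """Weighted quadratic CEED objective for generator outputs P.

    Fuel cost per unit i:  a_i + b_i*P_i + c_i*P_i^2
    Emission per unit i:   alpha_i + beta_i*P_i + gamma_i*P_i^2
    w trades off cost vs. emission (an assumed weighting scheme).
    """
    a, b, c = cost_coef.T
    alpha, beta, gamma = emis_coef.T
    fuel = np.sum(a + b * P + c * P**2)
    emission = np.sum(alpha + beta * P + gamma * P**2)
    return w * fuel + (1 - w) * emission

def de_step(pop, fit, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation with greedy selection.

    F and CR are fixed here; RLADE would instead let an RL agent pick
    the mutation strategy and CR based on the population's state.
    """
    rng = rng or np.random.default_rng(0)
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # pick three distinct individuals different from i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])        # DE/rand/1 mutation
        cross = rng.random(d) < CR                        # binomial crossover mask
        cross[rng.integers(d)] = True                     # guarantee one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if fit(trial) < fit(pop[i]):                      # greedy (minimizing) selection
            new_pop[i] = trial
    return new_pop
```

Because selection is greedy, the best objective value in the population is non-increasing across generations, which is what the convergence comparisons in the paper measure.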
