Abstract

This paper focuses on the critical load restoration problem in distribution systems following major outages. To provide fast online response and optimal sequential decision-making support, a reinforcement learning (RL) based approach is proposed to optimize the restoration process. Because of the large policy search space, renewable uncertainty, and nonlinearity inherent in this complex grid control problem, directly applying RL algorithms to train a satisfactory policy requires extensive tuning. To address this challenge, this paper leverages the curriculum learning (CL) technique to design a training curriculum built around a simpler stepping-stone problem that guides the RL agent toward solving the original hard problem in a progressive and more effective manner. We demonstrate that, compared with direct learning, CL facilitates controller training and achieves better performance. To study realistic scenarios where the renewable forecasts used for decision-making are generally imperfect, the experiments compare the trained RL controllers against two model predictive controllers (MPCs) under renewable forecasts with different error levels and examine how these controllers hedge against the uncertainty. Results show that the RL controllers are less susceptible to forecast errors than the baseline MPCs and provide a more reliable restoration process.
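To make the curriculum learning step described above concrete, the following minimal sketch illustrates the general idea of training first on a simpler stepping-stone task and then transferring the policy to the harder original task. It assumes a gymnasium-style toy environment and stable-baselines3's PPO; the ToyRestorationEnv class, its reward shaping, and all parameter values are illustrative assumptions and do not reproduce the paper's actual restoration model or training setup.

```python
# Minimal curriculum-learning sketch (illustrative only; environment and
# parameters are assumptions, not the paper's code).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyRestorationEnv(gym.Env):
    """Toy load-restoration task: pick up load against uncertain renewables."""

    def __init__(self, renewable_noise: float, horizon: int):
        super().__init__()
        self.renewable_noise = renewable_noise  # curriculum knob: forecast uncertainty
        self.horizon = horizon                  # curriculum knob: restoration horizon
        # Observation: [restored load fraction, renewable output, time remaining]
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        # Action: fraction of the remaining critical load to pick up this step
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.restored = 0.0
        return self._obs(), {}

    def _obs(self):
        renewable = 0.5 + self.np_random.normal(0.0, self.renewable_noise)
        self.renewable = float(np.clip(renewable, 0.0, 1.0))
        return np.array(
            [self.restored, self.renewable, 1.0 - self.t / self.horizon],
            dtype=np.float32,
        )

    def step(self, action):
        pickup = float(action[0]) * (1.0 - self.restored)
        # Reward restored load; penalize picking up more than available generation.
        shortfall = max(0.0, pickup - self.renewable)
        self.restored = min(1.0, self.restored + pickup - shortfall)
        reward = self.restored - 5.0 * shortfall
        self.t += 1
        terminated = self.t >= self.horizon
        return self._obs(), reward, terminated, False, {}


# Stage 1: stepping-stone problem -- no renewable uncertainty, short horizon.
model = PPO("MlpPolicy", ToyRestorationEnv(renewable_noise=0.0, horizon=6))
model.learn(total_timesteps=50_000)

# Stage 2: original hard problem -- uncertain renewables, full horizon.
# The stepping-stone policy is reused as the starting point (the curriculum step),
# instead of training on the hard task from scratch.
model.set_env(ToyRestorationEnv(renewable_noise=0.15, horizon=24))
model.learn(total_timesteps=150_000, reset_num_timesteps=False)
```

The design choice shown here mirrors the paper's high-level recipe: the stepping-stone stage removes the sources of difficulty (uncertainty, long horizon), and the second stage reintroduces them while keeping the already-learned policy as the initialization.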
