Abstract

Emergency frequency control is one of the most critical approaches to maintaining power system stability after major disturbances. With the increasing number of grid-connected renewable energy sources, existing model-based methods of frequency control face challenges in computational speed and scalability for large-scale systems. In this paper, the emergency frequency control problem is formulated as a Markov Decision Process and solved through a novel distributional deep reinforcement learning (DRL) method, namely the distributional soft actor critic (DSAC) method. Compared with reinforcement learning methods that estimate only the mean value, the proposed DSAC model estimates the full distribution of returns. This richer representation provides the agent with more information, yielding a faster and more stable learning process and improved frequency control performance. Simulation results on the IEEE 39-bus and IEEE 118-bus systems demonstrate the effectiveness and robustness of the proposed models, as well as their advantages over other state-of-the-art DRL algorithms.
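
For context, a minimal sketch of the distributional Bellman relation on which distributional critics such as DSAC are typically built (generic distributional RL notation; the paper's exact formulation, including its entropy-regularized soft return, may differ): the random return $Z^{\pi}(s,a)$ satisfies

$$Z^{\pi}(s,a) \overset{D}{=} r(s,a) + \gamma\, Z^{\pi}(s',a'), \qquad s' \sim P(\cdot \mid s,a),\; a' \sim \pi(\cdot \mid s'),$$

so the conventional value estimate is recovered as its expectation, $Q^{\pi}(s,a) = \mathbb{E}\big[Z^{\pi}(s,a)\big]$; a distributional agent learns an approximation of the distribution of $Z^{\pi}$ rather than only this mean.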
