Abstract

The complexity of modern power grids keeps increasing due to the expansion of renewable energy resources and the requirement of fast demand response, which poses great challenges for conventional power grid control systems. Existing autonomous control approaches for the power grid require an accurate system model and a powerful computational platform, which is difficult to scale up for large-scale energy systems with more control options and operating conditions. Facing these challenges, this article proposes a data-driven multi-agent power grid control scheme using a deep reinforcement learning (DRL) method. Specifically, the classic autonomous voltage control (AVC) problem is taken as an example and formulated as a Markov Game, with a heuristic method to partition agents. Then, a multi-agent AVC (MA-AVC) algorithm based on the multi-agent deep deterministic policy gradient (MADDPG) method, which features centralized training and decentralized execution, is developed to solve the AVC problem. The proposed method can learn from scratch and gradually master the system operation rules from input and output data. To demonstrate the effectiveness of the proposed MA-AVC algorithm, comprehensive case studies are conducted on the Illinois 200-Bus system considering load/generation changes, N-1 contingencies, and a weak centralized communication environment.
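The centralized-training/decentralized-execution structure of MADDPG mentioned above can be illustrated with a minimal sketch: each agent's actor maps only its local observation to an action, while a single critic scores the joint observation-action vector during training. All dimensions and names below (`N_AGENTS`, `OBS_DIM`, `ACT_DIM`, the linear actor/critic) are illustrative assumptions for exposition, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # e.g., one agent per voltage-control zone (assumed partition)
OBS_DIM = 4    # local measurements per agent, e.g., bus voltages (assumed)
ACT_DIM = 1    # local control, e.g., a generator voltage setpoint (assumed)

# Decentralized actors: one per agent, each seeing ONLY its own observation.
# Linear maps stand in for the actor networks used in MADDPG.
actors = [rng.normal(0, 0.1, size=(ACT_DIM, OBS_DIM)) for _ in range(N_AGENTS)]

# Centralized critic: during training it conditions on ALL observations and
# ALL actions, which is what makes the multi-agent environment stationary
# from the critic's point of view.
critic = rng.normal(0, 0.1, size=N_AGENTS * (OBS_DIM + ACT_DIM))

def act(agent_id: int, local_obs: np.ndarray) -> np.ndarray:
    """Decentralized execution: an agent acts on its local observation only."""
    return actors[agent_id] @ local_obs

def q_value(all_obs: list, all_acts: list) -> float:
    """Centralized training: the critic scores the joint state-action pair."""
    joint = np.concatenate([np.concatenate(all_obs), np.concatenate(all_acts)])
    return float(critic @ joint)

# One step: every agent acts locally; the critic evaluates the joint outcome.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [act(i, obs[i]) for i in range(N_AGENTS)]
q = q_value(obs, acts)
```

At deployment time only `act` is needed, so each agent can run without the centralized communication that training requires.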
