Abstract
Volt/Var control (VVC) is a crucial function in power distribution systems for minimizing power loss and maintaining voltages within allowable limits. However, incomplete and inaccurate information about the distribution network makes model-based VVC methods difficult to implement in practice. In this paper, we propose a novel multi-agent graph-based deep reinforcement learning (DRL) algorithm named MASAC-HGRN to address the VVC problem under partial observation constraints. Our proposed algorithm divides the power distribution system into several regions, with each region treated as an agent. Unlike traditional model-based or global-observation-based DRL methods, our proposed method leverages a practical decentralized training and decentralized execution (DTDE) paradigm to address the partial observation constraints. The well-trained agents gather information only from their interconnected neighbors and perform decentralized local control. Numerical studies with the IEEE 33-bus and 123-bus distribution test feeders demonstrate that our proposed MASAC-HGRN algorithm outperforms state-of-the-art RL algorithms and traditional model-based approaches in terms of VVC performance. Moreover, extensive robustness experiments demonstrate the flexibility and robustness of the DTDE framework.
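To make the decentralized-execution idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes hypothetical `RegionAgent`, `message`, and `act` names, replaces the trained MASAC-HGRN actor with a fixed random linear policy, and only illustrates how each region agent acts from its own measurements plus messages exchanged with directly interconnected neighbor regions.

```python
# Minimal sketch of decentralized execution with region agents (illustrative only).
# All class/function names below are assumptions, not the paper's code: each region
# agent sees only its own buses plus messages from directly connected neighbor
# regions, and outputs a local Volt/Var action.
import numpy as np

class RegionAgent:
    def __init__(self, region_id, obs_dim, msg_dim, act_dim, rng):
        self.region_id = region_id
        # Stand-in for a trained actor network: a fixed random linear policy.
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim + msg_dim))

    def message(self, local_obs):
        # Information shared with neighbors (e.g., boundary-bus measurements).
        return local_obs[:2]

    def act(self, local_obs, neighbor_msgs):
        # Mean-pool neighbor messages and compute a local action,
        # e.g., normalized reactive-power set-points in [-1, 1].
        msg = np.mean(neighbor_msgs, axis=0) if neighbor_msgs else np.zeros(2)
        x = np.concatenate([local_obs, msg])
        return np.tanh(self.W @ x)

# Toy topology: three regions, region 1 interconnected with regions 0 and 2.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
rng = np.random.default_rng(0)
agents = {r: RegionAgent(r, obs_dim=4, msg_dim=2, act_dim=2, rng=rng) for r in neighbors}

# One decentralized execution step: each agent uses only local + neighbor information.
local_obs = {r: rng.normal(size=4) for r in neighbors}            # per-region measurements
msgs = {r: agents[r].message(local_obs[r]) for r in neighbors}     # exchanged with neighbors
actions = {r: agents[r].act(local_obs[r], [msgs[n] for n in neighbors[r]]) for r in neighbors}
print(actions)
```

In the paper's setting, the stand-in linear policy would be the trained graph-recurrent actor of MASAC-HGRN, but the information flow sketched here, local observations plus neighbor-to-neighbor messages only, is the property the DTDE paradigm relies on.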