Abstract
Volt-VAR control (VVC) is a critical application in active distribution network management systems for reducing network losses and improving voltage profiles. To remove the dependency on inaccurate and incomplete network models and to enhance resiliency against communication or controller failures, we propose a consensus multi-agent deep reinforcement learning algorithm to solve the VVC problem, which determines the operating schedules of voltage regulators, on-load tap changers, and capacitors. The VVC problem is formulated as a networked multi-agent Markov decision process, which is solved using the maximum entropy reinforcement learning framework and a novel communication-efficient consensus strategy. The proposed algorithm allows individual agents to learn a group control policy using only local rewards. Numerical studies on IEEE distribution test feeders show that the proposed algorithm matches the performance of a single-agent reinforcement learning benchmark. In addition, the proposed algorithm is shown to be communication efficient and resilient.
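To illustrate the consensus idea described above, the following is a minimal sketch, not the paper's exact algorithm: each agent takes a local parameter update driven only by its own reward (with an entropy-style bonus standing in for the maximum entropy objective), then averages parameters with its neighbours over a sparse communication graph. All names (neighbours, local_gradient, ENTROPY_WEIGHT) and the toy gradient are illustrative assumptions.

```python
# Hypothetical sketch of consensus multi-agent learning with local rewards.
# The real method would use a soft actor-critic style loss; here a placeholder
# gradient stands in so the consensus structure is runnable and easy to see.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, DIM = 4, 8                 # agents (e.g. regulators, capacitors) and parameter size
STEP, ENTROPY_WEIGHT = 0.05, 0.2     # assumed step size and entropy bonus weight

# Ring communication graph: each agent exchanges parameters only with two neighbours,
# which is what makes the scheme communication efficient.
neighbours = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}
theta = rng.normal(size=(N_AGENTS, DIM))   # local policy/critic parameters


def local_gradient(params, local_reward):
    """Placeholder maximum-entropy update direction using only the local reward."""
    return local_reward * params - ENTROPY_WEIGHT * params


for step in range(50):
    rewards = rng.normal(size=N_AGENTS)    # local rewards observed by each agent
    # 1) Local update: each agent improves its parameters from its own reward only.
    theta = theta + STEP * np.stack(
        [local_gradient(theta[i], rewards[i]) for i in range(N_AGENTS)]
    )
    # 2) Consensus step: simultaneous averaging with graph neighbours drives the
    #    local parameters toward a common group policy.
    theta = np.stack(
        [np.mean([theta[i]] + [theta[j] for j in neighbours[i]], axis=0)
         for i in range(N_AGENTS)]
    )

print("parameter spread across agents after consensus:", np.ptp(theta, axis=0).max())
```

Under these assumptions, the printed spread shrinks as the neighbour-averaging step pulls the agents' locally trained parameters toward a shared group policy.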