Abstract

This paper proposes an attention-enabled multi-agent deep reinforcement learning (MADRL) framework for decentralized Volt-VAR control of active distribution networks. Using unsupervised clustering, the distribution system is decomposed into several sub-networks according to voltage-reactive power sensitivity relationships. The distributed control problem is then modeled as a Markov game and solved by an improved MADRL algorithm, in which each sub-network is modeled as an adaptive agent. An attention mechanism is developed to help each agent focus on the information most relevant to its reward. All agents are trained centrally offline to learn the optimal coordinated Volt-VAR control strategy and are executed in a decentralized manner, making online decisions with only local information. Compared with other distributed control approaches, the proposed method effectively handles uncertainties, achieves fast decision making, and significantly reduces communication requirements. Comparisons with model-based and other data-driven methods on the IEEE 33-bus and 123-bus systems demonstrate the benefits of the proposed approach.
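The abstract does not give implementation details, but the attention mechanism it describes is commonly realized as scaled dot-product attention in a centralized critic: each agent forms a query from its own encoded observation and weights the other agents' encoded information by relevance. The sketch below is a minimal, hypothetical illustration of that idea; the function name, weight matrices `W_q`, `W_k`, `W_v`, and embedding shapes are all assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attended_context(obs_embs, i, W_q, W_k, W_v):
    """Hypothetical scaled dot-product attention for agent i.

    obs_embs : list of per-agent observation embeddings (1-D arrays, dim d)
    Returns an attention-weighted summary of the *other* agents'
    embeddings, plus the attention weights themselves.
    """
    q = W_q @ obs_embs[i]                             # agent i's query
    others = [j for j in range(len(obs_embs)) if j != i]
    keys = np.stack([W_k @ obs_embs[j] for j in others])
    vals = np.stack([W_v @ obs_embs[j] for j in others])
    scores = keys @ q / np.sqrt(len(q))               # scaled dot products
    alpha = softmax(scores)                           # relevance weights
    return alpha @ vals, alpha
```

In a centralized-training setup, the returned context vector would feed each agent's critic during offline training, while online execution uses only the agent's local observation, consistent with the decentralized execution described in the abstract.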
