Abstract

Active distribution networks are encountering serious voltage violations associated with the proliferation of distributed photovoltaics. Cutting-edge research has confirmed that voltage regulation techniques based on deep reinforcement learning deliver superior performance in addressing this issue. However, such techniques are typically tied to a specific, fixed network topology and suffer from limited learning efficiency. To address these challenges, a novel edge-intelligence method is proposed, featuring a multi-agent deep reinforcement learning algorithm with a graph attention network and a physics-assisted mechanism. The method is unique in that it incorporates the graph attention network into reinforcement learning to capture the spatial correlations and topological linkages among nodes, allowing agents to be “aware” of topology variations caused by reconfiguration in real time. Furthermore, a relatively exact physical model is employed to generate reference experiences that are stored in the replay buffer, enabling agents to identify effective actions faster during training and thereby greatly enhancing the efficiency of learning voltage regulation policies. All agents are trained in a centralized manner to learn a coordinated voltage regulation strategy, which is then executed in a decentralized manner based solely on local observations for fast response. The proposed methodology is evaluated on the IEEE 33-node and 136-node systems, where it outperforms previously implemented approaches in both convergence and control performance.
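For illustration only, the sketch below shows a minimal single-head graph attention layer in PyTorch, indicating how per-node observations can be aggregated according to the current adjacency matrix so that the learned representation tracks topology changes after reconfiguration. The dimensions, names, and self-loop handling are assumptions made for this sketch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Minimal single-head graph attention layer (GAT-style).

    Illustrative sketch: node features are mixed only across edges present
    in the supplied adjacency matrix, so the output changes when the
    topology (adjacency) changes.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared node transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node observations, adj: (N, N) 0/1 adjacency matrix
        h = self.W(x)                                      # (N, out_dim)
        N = h.size(0)
        adj = adj + torch.eye(N, device=adj.device)        # add self-loops
        h_i = h.unsqueeze(1).expand(N, N, -1)              # sender features
        h_j = h.unsqueeze(0).expand(N, N, -1)              # receiver features
        e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))         # restrict to neighbors
        alpha = F.softmax(e, dim=-1)                       # attention coefficients
        return alpha @ h                                   # topology-aware features
```

A layer like this could feed each agent's actor/critic network, so that a change in the adjacency matrix (e.g., after feeder reconfiguration) directly alters the attention pattern without retraining from scratch.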
