Abstract

With the recent rapid uptake of photovoltaic (PV) resources, overvoltage during low-load periods or intermittent renewable generation has become one of the most critical challenges for electricity distribution networks. Traditional model-based local or centralized control methods, which rely on accurate system parameters, struggle to mitigate rapid changes in power systems. Model-free, data-driven multiagent deep reinforcement learning (MADRL) has been recognized as an effective solution for active voltage control. However, existing MADRL-based control approaches trained solely on data are agnostic to the underlying real-world physics. Therefore, this paper incorporates physical knowledge of regional distribution networks into MADRL's decision-making. The main contributions of this paper are summarized as follows. First, a novel physics-informed MADRL-based distributed voltage control method is proposed, which operates under a centralized training and distributed execution framework and requires only local measurements. Second, graph neural networks are employed to help MADRL agents learn graph knowledge (node features and topological information); further, a Transformer is introduced to extract discriminative representations and ensure cooperative control among agents. Third, a physics-guided neural network architecture is adopted in the actor network to stabilize the training process and improve sample efficiency. Finally, simulation results on modified IEEE 33-bus and 141-bus networks validate the proposed method's effectiveness, robustness, and computational efficiency.
