Abstract

This paper proposes a novel model-free, data-driven multi-agent deep reinforcement learning (MADRL) framework with centralized training and decentralized execution for distribution system voltage control under high penetration of PVs. The proposed MADRL coordinates both the real and reactive power control of PVs with existing static var compensators and battery storage systems. Unlike existing DRL-based voltage control methods, the proposed method does not rely on a system model during either the training or execution stage. This is achieved by developing a new interaction scheme between a surrogate model of the original system and the multi-agent soft actor-critic (MASAC) MADRL algorithm. In particular, a sparse pseudo-Gaussian process trained on a few shots of measurements is used to construct the surrogate model of the original environment, i.e., the power flow model. This is a data-driven process that requires no model parameters. Furthermore, the MASAC-enabled MADRL achieves better scalability by dividing the original system into different voltage control regions with the aid of voltage sensitivities to real and reactive power, where each region is treated as an agent. This partition also serves as the foundation for centralized training and decentralized execution, significantly reducing the communication requirements because only local measurements are needed for control. Comparative results against alternative methods on the IEEE 123-node and 342-node systems demonstrate the superiority of the proposed method.
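To make the surrogate-based training loop concrete, the following is a minimal sketch (not the authors' code) of the key idea: fit a data-driven surrogate of the voltage response to PV real/reactive power setpoints from a handful of measurements, then query that surrogate instead of a physical model during centralized RL training. A standard Gaussian process from scikit-learn stands in for the paper's sparse pseudo-Gaussian process, and names such as collect_measurements are hypothetical placeholders.

    # Sketch: few-shot, model-free surrogate of the power-flow mapping (action -> voltages)
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def collect_measurements(n_samples, n_controls, n_buses, rng):
        """Placeholder for field measurements: (P/Q setpoints, resulting bus voltages)."""
        X = rng.uniform(-1.0, 1.0, size=(n_samples, n_controls))              # normalized P/Q actions
        V = 1.0 + 0.05 * np.tanh(X @ rng.normal(size=(n_controls, n_buses)))  # synthetic voltages (p.u.)
        return X, V

    rng = np.random.default_rng(0)
    X_train, V_train = collect_measurements(n_samples=50, n_controls=6, n_buses=4, rng=rng)

    # Few-shot surrogate of the environment: no power flow model parameters are needed.
    kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
    surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    surrogate.fit(X_train, V_train)

    # During centralized training, a candidate control action is evaluated on the
    # surrogate rather than on the physical feeder or an explicit network model.
    action = rng.uniform(-1.0, 1.0, size=(1, 6))
    v_pred, v_std = surrogate.predict(action, return_std=True)
    reward = -np.sum(np.maximum(np.abs(v_pred - 1.0) - 0.05, 0.0))  # penalize voltages outside 0.95-1.05 p.u.
    print(v_pred, reward)

In the full framework, this reward signal would feed the MASAC critics during centralized training, while each regional agent executes its learned policy from local measurements only.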
