Abstract
Volt–VAR control (VVC) is essential for maintaining voltage stability and operational efficiency in distribution networks, particularly with the increasing integration of distributed energy resources. Traditional methods often struggle to manage real-time fluctuations in demand and generation. This paper addresses these challenges as follows. First, various resources such as static VAR compensators, photovoltaic systems, and demand response strategies are incorporated into the VVC scheme to enhance voltage regulation. Then, the VVC problem is formulated as a constrained Markov decision process (CMDP). Next, a safe deep reinforcement learning (SDRL) algorithm is proposed, incorporating a novel Lagrange multiplier update mechanism to ensure that the control policies adhere to safety constraints during the learning process. Finally, extensive simulations on the IEEE 33-bus test feeder demonstrate that the proposed SDRL-based VVC approach effectively improves voltage regulation and reduces power losses.
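For readers unfamiliar with how a CMDP is handled via Lagrangian relaxation, the sketch below illustrates a generic primal-dual multiplier update: the safety cost (e.g., voltage-limit violations) is penalized in the reward, and the multiplier is adjusted by dual gradient ascent. This is only an illustrative sketch; the paper's novel update mechanism is not reproduced here, and the learning rate and cost budget shown are assumed values, not the authors' settings.

```python
import numpy as np

# Hypothetical hyperparameters (not from the paper): dual learning rate
# and the per-episode budget on the safety cost (e.g., cumulative
# voltage-violation magnitude).
DUAL_LR = 0.01
COST_LIMIT = 0.0


def lagrangian_reward(reward, cost, lmbda):
    """Scalarize the CMDP objective: reward minus the multiplier-weighted
    safety cost, which the policy is then trained to maximize."""
    return reward - lmbda * cost


def update_multiplier(lmbda, episode_costs, cost_limit=COST_LIMIT, lr=DUAL_LR):
    """Dual gradient ascent: raise lambda when the average episode cost
    exceeds the budget, lower it otherwise, and project onto lambda >= 0."""
    violation = np.mean(episode_costs) - cost_limit
    return max(lmbda + lr * violation, 0.0)


# Toy usage: costs above the budget push the multiplier up, tightening
# the penalty on unsafe (voltage-violating) actions in later policy updates.
lmbda = 0.0
for costs in ([0.4, 0.6, 0.5], [0.2, 0.1, 0.0]):
    lmbda = update_multiplier(lmbda, costs)
    print(f"lambda = {lmbda:.4f}")
```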