Abstract

Volt-VAR control (VVC) plays an important role in enhancing the energy efficiency, power quality, and reliability of electric power distribution systems by coordinating the operation of equipment such as voltage regulators, on-load tap changers, and capacitor banks. VVC not only keeps voltages in the distribution system within desirable ranges but also reduces system operation costs, which include network losses and equipment depreciation from wear and tear. In this paper, a deep reinforcement learning approach is taken to learn a VVC policy that minimizes total operation costs while satisfying the physical operating constraints. The VVC problem is formulated as a constrained Markov decision process and solved by two policy gradient methods: trust region policy optimization and constrained policy optimization. Numerical study results based on the IEEE 4-bus and 13-bus distribution test feeders show that the policy gradient methods are capable of learning near-optimal solutions and determining control actions much faster than optimization-based approaches.
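To make the constrained Markov decision process formulation concrete, the following Python sketch illustrates how a VVC episode could be structured: the agent adjusts a regulator tap and a capacitor bank, the reward is the negative total operation cost (network losses plus switching wear), and a separate constraint-cost signal measures voltage-limit violations, which a method like constrained policy optimization would keep within a budget. Everything here is hypothetical, including the ToyVVCEnv class, the algebraic voltage surrogate, and all coefficients; it is not the paper's feeder model or implementation.

```python
import numpy as np

# Hypothetical toy CMDP environment for VVC (illustration only, not the
# paper's IEEE test feeders). Reward and constraint definitions mirror the
# abstract: reward = -(losses + switching cost), constraint = voltage limits.
class ToyVVCEnv:
    TAP_STEPS = np.array([-1, 0, 1])   # regulator tap: lower / hold / raise
    CAP_ACTIONS = np.array([0, 1])     # capacitor bank: off / on

    def __init__(self, switch_cost=0.1, v_min=0.95, v_max=1.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.switch_cost = switch_cost        # per-switch wear-and-tear cost
        self.v_min, self.v_max = v_min, v_max
        self.reset()

    def reset(self):
        self.tap = 0                               # tap position in [-16, 16]
        self.cap = 0                               # capacitor status
        self.load = self.rng.uniform(0.5, 1.0)     # per-unit load level
        return self._obs()

    def _obs(self):
        return np.array([self.load, self.tap / 16.0, self.cap],
                        dtype=np.float32)

    def _voltage(self):
        # Invented algebraic stand-in for a power-flow solve: voltage rises
        # with tap position and capacitor support, and sags with load.
        return 1.0 + 0.00625 * self.tap + 0.02 * self.cap - 0.06 * self.load

    def step(self, tap_move, cap_status):
        switches = abs(tap_move) + abs(cap_status - self.cap)
        self.tap = int(np.clip(self.tap + tap_move, -16, 16))
        self.cap = cap_status
        v = self._voltage()
        losses = 0.05 * self.load ** 2 * (1.0 - 0.3 * self.cap)  # toy losses
        # Reward: negative total operation cost (losses + equipment wear).
        reward = -(losses + self.switch_cost * switches)
        # Constraint cost: magnitude of voltage-limit violation. A CPO-style
        # update would bound its expectation; plain TRPO ignores this signal.
        constraint_cost = max(0.0, self.v_min - v) + max(0.0, v - self.v_max)
        # Random-walk load dynamics, clipped to a plausible range.
        self.load = float(np.clip(self.load + self.rng.normal(0, 0.05),
                                  0.3, 1.2))
        return self._obs(), reward, constraint_cost

# Example rollout with a random policy:
env = ToyVVCEnv()
obs = env.reset()
for _ in range(5):
    obs, r, c = env.step(int(env.rng.choice(env.TAP_STEPS)),
                         int(env.rng.choice(env.CAP_ACTIONS)))
    print(f"reward={r:.3f}, constraint={c:.3f}")
```

The key structural point is the separation of the scalar reward from the constraint cost: an unconstrained method such as trust region policy optimization maximizes expected reward alone, while constrained policy optimization additionally enforces a bound on the expected cumulative constraint cost at every policy update.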
