Abstract

The dynamics and randomness of the active distribution network (ADN) make optimal power flow (OPF) difficult to solve. In this paper, we propose a deep reinforcement learning method for the dynamic optimal power flow (DOPF) of an ADN that includes photovoltaic (PV) generation and energy storage systems (ESS). In the distributed generation (DG) model of the ADN, PV and ESS units are treated as PQ nodes, and constant power factor control is adopted for the reactive power management of PV to enhance the generality of the model. Unlike traditional methods, the OPF over multiple periods is formulated as a Markov decision process whose actions are the reactive output of PV and the active and reactive output of ESS. Because the actions are continuous, the deep deterministic policy gradient (DDPG) algorithm is introduced to train an agent that selects actions at each time step of the day. Finally, the proposed method is verified on a modified IEEE 33-bus distribution network with real PV power and load data. The experimental results show that the method adjusts the agent's decisions according to the real-time active output of DG and loads, minimizing the total active network loss over a day.
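To make the learning component concrete, the following is a minimal sketch of a DDPG actor-critic agent for continuous actions, written in PyTorch. It is not the authors' implementation: the state and action dimensions, network sizes, and hyperparameters are illustrative assumptions; the state is assumed to collect nodal PV/load injections and ESS state of charge, the action to collect PV reactive and ESS active/reactive set-points scaled to [-1, 1], and the reward (the negative active network loss returned by a power-flow solver) to come from an environment that is not shown.

```python
# Minimal DDPG actor-critic sketch (PyTorch). Illustrative assumptions only:
# state = nodal PV/load injections + ESS state of charge,
# action = PV reactive and ESS active/reactive set-points in [-1, 1].
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # continuous actions in [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # Q(s, a)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class DDPGAgent:
    def __init__(self, state_dim, action_dim, gamma=0.99, tau=0.005, lr=1e-3):
        self.actor = Actor(state_dim, action_dim)
        self.critic = Critic(state_dim, action_dim)
        self.actor_target = copy.deepcopy(self.actor)
        self.critic_target = copy.deepcopy(self.critic)
        self.actor_opt = torch.optim.Adam(self.actor.parameters(), lr=lr)
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=lr)
        self.gamma, self.tau = gamma, tau

    def act(self, state, noise_std=0.1):
        # Deterministic policy plus Gaussian exploration noise during training.
        with torch.no_grad():
            action = self.actor(state)
        return (action + noise_std * torch.randn_like(action)).clamp(-1.0, 1.0)

    def update(self, state, action, reward, next_state, done):
        # Critic: minimize the TD error against the target networks.
        with torch.no_grad():
            next_q = self.critic_target(next_state, self.actor_target(next_state))
            target_q = reward + self.gamma * (1.0 - done) * next_q
        critic_loss = nn.functional.mse_loss(self.critic(state, action), target_q)
        self.critic_opt.zero_grad(); critic_loss.backward(); self.critic_opt.step()

        # Actor: ascend the critic's estimate of Q(s, pi(s)).
        actor_loss = -self.critic(state, self.actor(state)).mean()
        self.actor_opt.zero_grad(); actor_loss.backward(); self.actor_opt.step()

        # Soft (Polyak) update of the target networks.
        for tgt, src in ((self.actor_target, self.actor), (self.critic_target, self.critic)):
            for p_t, p in zip(tgt.parameters(), src.parameters()):
                p_t.data.mul_(1.0 - self.tau).add_(self.tau * p.data)
```

In a DOPF setting of the kind described in the abstract, one training episode would correspond to one day, with the agent calling `act` at each time step and `update` on batches drawn from a replay buffer; the buffer and power-flow environment are omitted here.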
