Abstract

Dynamic distribution network reconfiguration (DNR) algorithms perform hourly status changes of remotely controllable switches to improve distribution system performance. The problem is typically solved by physical model-based control algorithms, which not only rely on accurate network parameters but also lack scalability. To address these limitations, this paper develops a data-driven batch-constrained reinforcement learning (RL) algorithm for the dynamic DNR problem. The proposed RL algorithm learns the network reconfiguration control policy from a finite historical operational dataset, without interacting with the distribution network. Numerical studies on three distribution networks show that the proposed algorithm not only outperforms state-of-the-art RL algorithms but also improves upon the behavior control policy that generated the historical operational data. The proposed algorithm also scales well and finds a desirable network reconfiguration solution in real time.
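The batch-constrained idea described in the abstract (learning a policy purely from a fixed logged dataset, while restricting the learned policy to actions the behavior policy actually took) can be illustrated in a tabular setting. The sketch below is not the paper's algorithm; it is a minimal discrete batch-constrained Q-learning analogue, where the names, the frequency threshold, and the toy transition format `(state, action, reward, next_state)` are all assumptions for illustration.

```python
import numpy as np

def batch_constrained_q(dataset, n_states, n_actions,
                        gamma=0.95, lr=0.1, threshold=0.3, n_epochs=200):
    """Tabular batch-constrained Q-learning on a fixed logged dataset.

    dataset: list of (s, a, r, s_next) transitions collected by the
    behavior policy. The Bellman backup is restricted to actions whose
    empirical frequency at s_next is at least `threshold` times that of
    the most frequent action there, so the learned policy cannot exploit
    state-action pairs the data never (or rarely) visited.
    """
    # Empirical behavior-policy action counts per state.
    counts = np.zeros((n_states, n_actions))
    for s, a, _, _ in dataset:
        counts[s, a] += 1

    # Mask of actions the learned policy is allowed to take in each state.
    max_counts = counts.max(axis=1, keepdims=True)
    allowed = counts >= threshold * np.maximum(max_counts, 1)

    Q = np.zeros((n_states, n_actions))
    for _ in range(n_epochs):
        for s, a, r, s2 in dataset:
            if allowed[s2].any():
                # Back up only over actions seen often enough at s2.
                q_next = np.where(allowed[s2], Q[s2], -np.inf).max()
            else:
                # No logged actions at s2: treat it as terminal.
                q_next = 0.0
            Q[s, a] += lr * (r + gamma * q_next - Q[s, a])
    return Q, allowed
```

In the full-scale problem the states would be network operating conditions and the actions switch configurations, with function approximation replacing the table; the constraint mechanism, however, is the same: never evaluate or select actions unsupported by the historical data.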
