Abstract

High penetration of distributed energy storage systems (ESS) offers an unparalleled opportunity to reinforce the distribution grid locally against upstream disruptions; however, operating them at scale under uncertainty of load and renewable generation is computationally expensive. While deep reinforcement learning (DRL) has been proposed to train operator agents capable of handling the uncertainty and high dimensionality of the problem, it falls short when safety and feasibility guarantees are required in critical operations. This paper proposes a hierarchical coupling of DRL and mathematical optimization for the operation of ESS in distribution grids, in order to exploit DRL's fast response while keeping network constraints satisfied. In the proposed method, strategic scheduling of distributed ESS units is performed locally by fast DRL-trained agents, while critical grid-wide operations such as fault management and voltage control are handled by an optimization-based central controller. The local controller is trained with Twin Delayed Deep Deterministic Policy Gradient (TD3), whose response time is three orders of magnitude faster than stochastic optimization while delivering solutions of similar optimality.
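The hierarchical scheme described above can be sketched in miniature: a fast local policy (a stand-in for the trained TD3 actor) proposes an ESS power setpoint, and a central layer enforces feasibility before the setpoint is applied. All names, limits, and the simple threshold policy below are illustrative assumptions, not the paper's actual model; in particular, the paper's central controller solves an optimization problem, which is replaced here by a plain projection onto power and energy limits.

```python
P_MAX = 50.0   # ESS power limit in kW (assumed)
E_MAX = 200.0  # ESS energy capacity in kWh (assumed)
DT = 1.0       # scheduling interval in hours (assumed)

def local_policy(soc: float, price: float) -> float:
    """Hypothetical stand-in for the TD3 actor: discharge when the price
    signal is high, otherwise charge, scaled by remaining headroom."""
    return -P_MAX if price > 0.5 else P_MAX * (1.0 - soc)

def central_projection(p: float, soc: float) -> float:
    """Project the proposed setpoint onto the feasible set: respect the
    power limit and keep stored energy within [0, E_MAX] after DT hours."""
    p = max(-P_MAX, min(P_MAX, p))
    e_next = soc * E_MAX + p * DT
    if e_next > E_MAX:
        p = (E_MAX - soc * E_MAX) / DT
    elif e_next < 0.0:
        p = -soc * E_MAX / DT
    return p

def step(soc: float, price: float) -> tuple[float, float]:
    """One scheduling interval: local proposal, central check, SoC update."""
    p = central_projection(local_policy(soc, price), soc)
    soc_next = (soc * E_MAX + p * DT) / E_MAX
    return p, soc_next
```

The division of labor mirrors the abstract: the local policy is a cheap function evaluation (hence the fast response time), while the central layer is the only place grid constraints are enforced.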
