Abstract

Power distribution systems are continually challenged by extreme climatic events. The reliance of the energy sector on overhead infrastructure for electricity distribution has necessitated a paradigm shift in grid management toward resilience enhancement. Grid hardening strategies are among the most effective methods for improving resilience. Limited budgets and resources, however, demand optimal planning of hardening strategies. This paper develops a planning framework based on Deep Reinforcement Learning (DRL) to enhance the long-term resilience of distribution systems through hardening strategies. The resilience maximization problem is formulated as a Markov decision process and solved via the integration of a novel ranking strategy, neural networks, and reinforcement learning. Unlike existing methods, which typically target resilience against a single future hazard, the proposed framework quantifies life-cycle resilience, accounting for the possibility of multiple stochastic events over a system's life. This development is facilitated by a temporal reliability model that captures the compounding effects of gradual deterioration and hazard impacts under stochastic hurricane occurrences. The framework is applied to a large-scale power distribution system with over 7,000 poles. Results are compared with the optimal strategy obtained from a mixed-integer nonlinear programming model solved using Branch and Bound (BB), as well as the strength-based strategy prescribed by the U.S. National Electric Safety Code (NESC). Results indicate that the proposed framework enhances the long-term resilience of the system by over 30% relative to the NESC strategy for a 100-year planning horizon. Furthermore, the DRL-based approach yields optimal solutions for problems that are computationally intractable for the BB algorithm.
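To make the Markov-decision-process formulation concrete, the toy sketch below casts sequential pole hardening as an MDP and solves it with tabular Q-learning, a simplified stand-in for the paper's DRL agent. All quantities (three poles, survival probabilities, episode counts) are hypothetical and chosen only for illustration; the actual framework uses neural networks, a ranking strategy, and a temporal reliability model over thousands of poles.

```python
# Toy MDP: each period, harden one pole; reward is the expected number of
# poles surviving a storm. Survival probabilities are hypothetical:
# legacy poles survive with p = [0.5, 0.7, 0.9]; a hardened pole with 0.95.
import random

random.seed(0)

N_POLES = 3
HORIZON = 4                      # planning periods per episode
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2

LEGACY_P = [0.5, 0.7, 0.9]       # hypothetical legacy survival probabilities
HARDENED_P = 0.95

def step(state, action):
    """Apply hardening to pole `action`; return next state and reward."""
    state = list(state)
    state[action] = 1            # mark pole as hardened
    reward = sum(HARDENED_P if h else LEGACY_P[i] for i, h in enumerate(state))
    return tuple(state), reward

Q = {}                           # tabular value function, keyed by (state, t)
def q(s, t):
    return Q.setdefault((s, t), [0.0] * N_POLES)

for episode in range(2000):
    s = (0,) * N_POLES           # all poles start un-hardened
    for t in range(HORIZON):
        # epsilon-greedy action selection
        a = (random.randrange(N_POLES) if random.random() < EPS
             else max(range(N_POLES), key=lambda i: q(s, t)[i]))
        s2, r = step(s, a)
        future = max(q(s2, t + 1)) if t + 1 < HORIZON else 0.0
        q(s, t)[a] += ALPHA * (r + GAMMA * future - q(s, t)[a])
        s = s2

start = (0,) * N_POLES
first_to_harden = max(range(N_POLES), key=lambda i: q(start, 0)[i])
print("first pole to harden:", first_to_harden)
```

Because every pole is eventually hardened within the horizon, the discounted return is maximized by hardening the most fragile pole first, which is the policy the learned Q-values recover in this toy setting.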
