Abstract

Extreme weather events are occurring with increasing frequency due to global warming, and existing power systems are ill-equipped to cope with such high-impact, low-probability events. To tackle these highly stochastic events with large state spaces, conventional model-based approaches build optimization models that capture the impact on the power system; however, such sophisticated models incur high computational complexity and lack the ability to learn. This paper formulates the distributed generator (DG) rescheduling problem as a discrete Markov decision process (MDP) without transition probability information, accounting for the uncertainties associated with component failures. A model-free optimization framework based on deep reinforcement learning (DRL) is proposed to determine the optimal rescheduling strategy and thereby improve the resilience of a distribution system. Deep neural networks (DNNs) are applied to extract features from the large stochastic state space and to form multivariate Gaussian distributions that handle high-dimensional continuous control actions. A conventional reinforcement learning (RL) algorithm and two DRL algorithms are tested and compared on the IEEE 9-bus, IEEE 39-bus, and IEEE 123-bus systems; the results illustrate the effectiveness of the proposed framework in both meshed and radial topologies.
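
To illustrate the policy architecture the abstract describes, the following is a minimal sketch (not the authors' implementation) of a DNN that maps a system state vector to a multivariate Gaussian over continuous DG set-point adjustments, written in PyTorch. The network sizes, the state and action dimensions, and all variable names are assumptions chosen for illustration.

    # Illustrative sketch only: a DNN policy producing a multivariate
    # Gaussian over continuous DG rescheduling actions. Dimensions and
    # names are hypothetical, not taken from the paper.
    import torch
    import torch.nn as nn
    from torch.distributions import MultivariateNormal

    class GaussianPolicy(nn.Module):
        def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
            super().__init__()
            # Feature extractor over the large stochastic state space
            self.backbone = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
            )
            self.mean_head = nn.Linear(hidden, action_dim)   # Gaussian mean
            # State-independent log standard deviations (diagonal covariance)
            self.log_std = nn.Parameter(torch.zeros(action_dim))

        def forward(self, state: torch.Tensor) -> MultivariateNormal:
            features = self.backbone(state)
            mean = self.mean_head(features)
            cov = torch.diag(self.log_std.exp() ** 2)
            return MultivariateNormal(mean, covariance_matrix=cov)

    # Usage: sample a rescheduling action for a hypothetical state vector
    policy = GaussianPolicy(state_dim=27, action_dim=3)
    dist = policy(torch.randn(27))
    action = dist.sample()            # continuous DG set-point adjustments
    log_prob = dist.log_prob(action)  # used by policy-gradient DRL updates

A diagonal covariance, as assumed here, keeps sampling and log-probability evaluation cheap while still giving the policy one learnable exploration scale per action dimension.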
