Abstract

When assessing the risk of cascading outages in a power system, a common approach is to search continuously for risky fault chains through cascading outage simulation. In large systems, however, the sheer number of fault chains makes risk assessment difficult. This paper proposes a cascading outage risk assessment strategy based on deep reinforcement learning, which improves the efficiency of risk assessment by using an agent to guide the search toward high-risk fault chains. A cascading outage is abstracted as a two-stage Markov process, a risk indicator that accounts for outage probability is proposed, and a tree search framework based on the Markov decision process and a deep Q network (DQN) is constructed. Compared with the classical Q-learning method, the deep Q network avoids the "curse of dimensionality" and has the potential to scale to large systems. The feasibility and accuracy of the method are verified by comparison with classical Q-learning results on the IEEE 9-Bus System, and simulations on the IEEE 39-Bus New England System further demonstrate the efficiency of the method.
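To make the DQN-guided search concrete, the sketch below shows one way such an agent could be wired up: the state is a binary vector marking which lines have already tripped in the current fault chain, the action is the index of the next line to trip, and the reward is assumed to be the risk increment of the resulting outage stage. This is a minimal illustration under those assumptions, not the paper's implementation; the class and function names (QNetwork, select_action, dqn_update) are hypothetical.

```python
# Minimal DQN sketch for guiding fault-chain search (illustrative, not the paper's code).
import random
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps the outage state vector to a Q-value for tripping each line."""

    def __init__(self, n_lines, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_lines, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_lines),
        )

    def forward(self, state):
        return self.net(state)


def select_action(q_net, state, epsilon):
    """Epsilon-greedy choice of the next line to trip in the fault chain."""
    if random.random() < epsilon:
        return random.randrange(state.shape[-1])
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def dqn_update(q_net, target_net, optimizer, batch, gamma=0.95):
    """One gradient step on the TD error: r + gamma * max_a' Q_target(s', a') - Q(s, a)."""
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * q_next
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the Q-function is represented by a network rather than a table indexed by every possible outage state, the memory cost no longer grows exponentially with the number of lines, which is the sense in which the DQN avoids the curse of dimensionality that limits tabular Q-learning.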
