Emergency control is essential for maintaining the stability of power systems, serving as a key defense against the destabilization and cascading failures triggered by faults. Under-voltage load shedding is a popular and effective approach to emergency control. However, as power systems grow in complexity and scale and uncertainty factors increase, traditional approaches struggle with computation speed, accuracy, and scalability. Deep reinforcement learning holds significant potential for power system decision-making problems, but existing deep reinforcement learning algorithms have limitations in effectively leveraging diverse operational features, which affects the reliability and efficiency of emergency control strategies. This paper presents a novel approach that derives real-time emergency voltage control strategies for transient stability enhancement by integrating edge-graph convolutional networks with reinforcement learning. The method transforms the traditional emergency control optimization problem into a sequential decision-making process. By utilizing an edge-graph convolutional neural network, it efficiently extracts critical information on the correlations among the power system's operating state, node and branch information, and the uncertainty factors involved. Moreover, clipped double Q-learning, delayed policy updates, and target policy smoothing are introduced to mitigate the overestimation and hyperparameter-sensitivity issues of the deep deterministic policy gradient algorithm. The effectiveness of the proposed method for emergency control decision-making is verified on the IEEE 39-bus and IEEE 118-bus systems.
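For concreteness, below is a minimal sketch of the TD3-style critic target named in the abstract: clipped double Q-learning combined with target policy smoothing (the delayed policy update simply runs the actor and target-network updates once every few critic steps). The plain MLP networks, dimensions, and hyperparameters here are illustrative assumptions, not the paper's implementation, which pairs these updates with an edge-graph convolutional encoder.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; the paper's state/action spaces are defined
# by the power system model and load-shedding actions.
state_dim, action_dim, max_action = 10, 3, 1.0

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

actor_target = mlp(state_dim, action_dim)    # target policy network
q1_target = mlp(state_dim + action_dim, 1)   # first target critic
q2_target = mlp(state_dim + action_dim, 1)   # second target critic

def td3_target(next_state, reward, done,
               gamma=0.99, policy_noise=0.2, noise_clip=0.5):
    """Compute the critic regression target used by TD3-style updates."""
    with torch.no_grad():
        # Target policy smoothing: perturb the target action with
        # clipped Gaussian noise to regularize the value estimate.
        raw_action = max_action * torch.tanh(actor_target(next_state))
        noise = (torch.randn_like(raw_action) * policy_noise
                 ).clamp(-noise_clip, noise_clip)
        next_action = (raw_action + noise).clamp(-max_action, max_action)
        # Clipped double Q-learning: take the minimum of the two target
        # critics to curb the overestimation bias of DDPG.
        sa = torch.cat([next_state, next_action], dim=-1)
        target_q = torch.min(q1_target(sa), q2_target(sa))
        return reward + gamma * (1.0 - done) * target_q

# Usage on a random batch: y is the target both critics regress toward.
batch = 32
s2 = torch.randn(batch, state_dim)
r = torch.randn(batch, 1)
d = torch.zeros(batch, 1)
y = td3_target(s2, r, d)  # shape (batch, 1)
```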