Abstract

In the event of faults or severe disturbances, a power system enters an emergency operating state. Once instability is detected, oscillations and blackouts will follow if effective control measures are not taken in time. Generator tripping control (GTC) is the most effective emergency control measure. Given the mismatch between traditional GTC algorithms and machine-learning-based transient stability assessment methods, a new real-time GTC method is needed. This paper designs a three-part control framework for the GTC problem. In the offline pre-learning part, the control agent acquires decision-making ability by interacting with a simulation environment. The trained agent is then transplanted to the online application part, where it helps system operators make decisions. Meanwhile, in the online learning part, the agent is updated with real data so that it adapts better to the actual system. A deep reinforcement learning algorithm, deep deterministic policy gradient (DDPG), is employed to train the control agent within this framework. A modified DDPG algorithm and a corresponding reward function are designed for the GTC problem. A convolutional neural network (CNN) is added to the DDPG network, which shortens the agent's training time and improves the generalization ability of the algorithm. Trained with simulation data and real system experience, the control agent can determine control strategies in a timely manner according to the system's operating conditions. Simulation results on the IEEE 39-bus system and a realistic regional power system in Eastern China demonstrate the effectiveness, generalizability, and timeliness of the decision algorithm.
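To make the CNN-augmented DDPG idea concrete, the sketch below shows one plausible shape for the actor network: a CNN front end extracts features from a short window of post-fault system measurements, and a fully connected policy head maps those features to a continuous per-generator tripping action. This is a minimal illustration, not the authors' implementation; the class name, layer sizes, input dimensions, and action encoding are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch (not the paper's code) of a DDPG actor with a CNN
# feature extractor, as the abstract describes. All dimensions and the
# tripping-action encoding are illustrative assumptions.
import torch
import torch.nn as nn


class CNNActor(nn.Module):
    """Maps a window of system measurements (e.g., rotor angles and
    speeds over time) to a continuous generator-tripping action."""

    def __init__(self, n_channels: int, n_generators: int):
        super().__init__()
        self.features = nn.Sequential(            # CNN feature extractor
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # collapse the time axis
            nn.Flatten(),
        )
        self.head = nn.Sequential(                # policy head
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, n_generators),
            nn.Sigmoid(),                         # fraction of each unit to trip
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_steps) window of post-fault measurements
        return self.head(self.features(x))


if __name__ == "__main__":
    # Hypothetical setup: 10 measurement channels, 10 generators
    # (as in the IEEE 39-bus system), 20 time steps per observation.
    actor = CNNActor(n_channels=10, n_generators=10)
    state = torch.randn(1, 10, 20)                # one simulated observation
    action = actor(state)                         # per-generator tripping signal
    print(action.shape)                           # torch.Size([1, 10])
```

In a full DDPG setup, a critic network with a similar CNN front end would score state-action pairs, and the actor would be trained offline against a simulation environment before being transplanted online, as the framework describes.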
