Abstract

This paper proposes a stochastic optimal relaxed control methodology based on reinforcement learning (RL) for solving the automatic generation control (AGC) problem under NERC's control performance standards (CPS). The multi-step Q(λ) learning algorithm is introduced to effectively tackle the long time-delay control loop of AGC thermal plants in a non-Markovian environment. The moving averages of CPS1/ACE are adopted as the state feedback input, and the CPS control and relaxed control objectives are formulated as a multi-criteria reward function via a linear weighted aggregation method. This optimal AGC strategy provides a customized platform for interactive self-learning rules that maximize the long-run discounted reward. Statistical experiments show that the RL-based Q(λ) controllers can effectively enhance the robustness and dynamic performance of AGC systems and reduce the number of pulses and pulse reversals while ensuring CPS compliance. The novel AGC scheme also provides a convenient way of controlling the degree of CPS compliance and relaxation by tuning relaxation factors online to implement the desired relaxed control.
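The abstract describes the controller only at a high level. As a concrete illustration, the sketch below shows a tabular Watkins-style Q(λ) update with eligibility traces, which carry the temporal-difference error back across the long delay of the thermal-plant control loop, together with a linear weighted reward aggregating the CPS1 and ACE objectives. The state is assumed to be a discretized index of the CPS1/ACE moving averages. All names, parameter values, the CPS1 target, and the relaxation-factor weighting are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

class QLambdaAGC:
    """Tabular Watkins-style Q(lambda) controller (illustrative sketch)."""

    def __init__(self, n_states, n_actions,
                 alpha=0.1, gamma=0.9, lam=0.9, epsilon=0.1):
        self.Q = np.zeros((n_states, n_actions))  # action-value estimates
        self.E = np.zeros((n_states, n_actions))  # eligibility traces
        self.alpha, self.gamma = alpha, gamma
        self.lam, self.epsilon = lam, epsilon

    def select_action(self, s, rng):
        # Epsilon-greedy choice over discrete regulation commands.
        if rng.random() < self.epsilon:
            return int(rng.integers(self.Q.shape[1]))
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, r, s_next):
        greedy = a == int(np.argmax(self.Q[s]))  # was the action exploitative?
        # One-step TD error; the traces below spread it over earlier
        # state-action pairs, which is what lets the multi-step method
        # cope with the long time delay of the thermal-plant loop.
        delta = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.E[s, a] += 1.0                      # accumulating trace
        self.Q += self.alpha * delta * self.E
        if greedy:
            self.E *= self.gamma * self.lam      # decay all traces
        else:
            self.E[:] = 0.0                      # Watkins: cut traces after exploration


def reward(cps1_pct, ace_mw, relax=0.5, cps1_target=180.0):
    # Hypothetical linear weighted aggregation of the CPS-compliance and
    # relaxed-control objectives; `relax` and `cps1_target` are assumptions.
    cps1_penalty = max(0.0, cps1_target - cps1_pct)  # shortfall below target
    return -(relax * cps1_penalty + (1.0 - relax) * abs(ace_mw))
```

In this sketch the `relax` weight plays the role the abstract ascribes to the online-tunable relaxation factors: shifting weight between strict CPS1 tracking and ACE magnitude changes the degree of compliance the learned policy targets.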
