With the growing proportion of renewable energy sources in the power grid, the demand on ultra-supercritical (USC) units for deep peak shaving and rapid load regulation, so as to provide more flexible and stable power services, is further increasing. At the same time, load-tracking speed is no longer the only significant objective in the flexible operation of USC units; operational economy is becoming increasingly important. To this end, a hierarchical reinforcement learning generalized predictive control (GPC) scheme with improved economy is proposed and applied to a USC unit. First, the model-mismatch problem in the control process is taken into account, and an integral action is added to the GPC to eliminate the steady-state error during load regulation. Second, a hierarchical reinforcement learning GPC scheme that integrates GPC with the twin delayed deep deterministic policy gradient (TD3) algorithm is presented. The complex flexibility-demand problem is decomposed into rapid load regulation and economic operation, which are solved in the GPC-based upper layer and the reinforcement learning agent-based lower layer, respectively. Third, to address the flexibility demand, a multi-criteria optimization function including the load-tracking cost, main-steam throttling loss, and coal consumption rate of the unit is constructed. Then, through accurate trajectory tracking by the GPC and online optimization of the control sequence by the reinforcement learning agent, the operational flexibility and economy of the USC unit are improved simultaneously within safe boundaries. Finally, to verify the effectiveness of the proposed hierarchical reinforcement learning GPC strategy, large-scale load-variation and disturbance-rejection tests covering 30%–100% of the rated load of a 1000 MW USC unit are carried out, and the load-regulation performance is compared with that of other strategies. The results confirm that the designed hierarchical reinforcement learning GPC control scheme benefits the flexible and economic operation capability of the USC unit.
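The abstract refers to a multi-criteria optimization function combining the load-tracking cost, main-steam throttling loss, and coal consumption rate. The sketch below illustrates one possible weighted form of such an objective; the function name multi_criteria_cost, the weights w_track, w_throttle, and w_coal, and the array arguments are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def multi_criteria_cost(y_ref, y, throttle_loss, coal_rate,
                        w_track=1.0, w_throttle=0.1, w_coal=0.05):
    """Illustrative weighted objective (assumed form, not the paper's):
    load-tracking error plus main-steam throttling loss plus coal
    consumption rate, summed over a prediction horizon."""
    y_ref = np.asarray(y_ref, dtype=float)
    y = np.asarray(y, dtype=float)
    tracking_cost = np.sum((y_ref - y) ** 2)       # load-tracking cost
    return (w_track * tracking_cost
            + w_throttle * np.sum(throttle_loss)   # main-steam throttling loss
            + w_coal * np.sum(coal_rate))          # coal consumption rate

# Example with hypothetical horizon data (load in MW, losses in arbitrary units):
J = multi_criteria_cost(y_ref=[600, 650, 700], y=[595, 648, 702],
                        throttle_loss=[0.8, 0.7, 0.6],
                        coal_rate=[295.0, 296.5, 298.0])
```

In such a scheme, an objective of this kind could serve as the reward signal for the reinforcement learning agent in the lower layer, while the upper-layer GPC handles trajectory tracking; the exact formulation used in the paper may differ.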