Abstract

In this article, a deep reinforcement learning (RL)-based control approach with enhanced learning efficiency and effectiveness is proposed to address the wind farm control problem. Specifically, a novel composite experience replay (CER) strategy is designed and embedded in the deep deterministic policy gradient (DDPG) algorithm. CER provides a new sampling scheme that mines the information in stored transitions in depth by trading off rewards against temporal difference (TD) errors. Modified importance-sampling weights are introduced into the training of the neural networks (NNs) to handle the distribution mismatch induced by CER. The CER-DDPG approach is then applied to optimizing the total power production of wind farms. The main challenge of this control problem stems from the strong wake effects among wind turbines and the stochastic nature of the environment, which render it intractable for conventional control approaches. A reward regularization process is designed alongside CER-DDPG, employing an additional NN to compensate for the reward bias caused by stochastic wind speeds. Tests with a dynamic wind farm simulator (WFSim) show that the proposed method achieves higher rewards at lower training cost than conventional deep RL-based control approaches, and that it can increase the total power generation of wind farms with different specifications.
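The abstract states that CER samples transitions by trading off rewards against TD errors and corrects the resulting distribution mismatch with modified importance-sampling weights. The sketch below illustrates one plausible reading of that idea; the convex-combination priority rule, the parameter names (`alpha`, `eta`, `beta`), and the buffer design are assumptions for illustration, not the paper's actual formulation.

```python
import random


class CompositeReplayBuffer:
    """Illustrative sketch of a composite experience replay (CER) buffer.

    Assumption: the composite priority is a convex combination of the
    absolute reward and the absolute TD error. The paper only states that
    CER trades these off; the exact mixing rule here is hypothetical.
    """

    def __init__(self, capacity, alpha=0.6, eta=0.5, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha    # priority exponent, as in prioritized replay
        self.eta = eta        # tradeoff: 1.0 = pure TD error, 0.0 = pure reward
        self.beta = beta      # importance-sampling correction exponent
        self.eps = eps        # keeps every priority strictly positive
        self.buffer = []      # stored transitions
        self.priorities = []  # composite priority per transition

    def push(self, transition, reward, td_error):
        # Composite priority mixes |reward| and |TD error| (assumed rule).
        p = (self.eta * abs(td_error)
             + (1.0 - self.eta) * abs(reward)
             + self.eps) ** self.alpha
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size):
        # Sample indices proportionally to composite priorities.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling; normalized by the max for stability.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-self.beta) for i in idx]
        w_max = max(weights)
        weights = [w / w_max for w in weights]
        return [self.buffer[i] for i in idx], weights
```

In a DDPG training loop, the returned `weights` would scale each transition's contribution to the critic loss, which is the standard way importance-sampling corrections enter NN training under prioritized sampling.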
