Abstract

The twist angle distribution (TAD) of a wind turbine blade determines its efficiency in terms of electricity production. Because the blade normally operates in dynamic wind environments where wind speed varies widely, it is crucial to search for the optimal TAD at different wind speeds during the design process. Traditional methods for optimal TAD searching require a large number of calls to complex simulators, making them inefficient and time-consuming and thereby hindering the blade design process. Hence, this work presents a reinforcement learning-based method, named RL-TAD, for searching the optimal TAD efficiently. The fundamental idea of RL-TAD is to have an agent learn the TAD searching policy and then reuse this agent to search for the optimal TAD under different wind speeds. This idea distinguishes RL-TAD from the commonly used genetic algorithm-based methods, which only output the TAD and do not reuse the searching policy. RL-TAD comprises an offline stage and an online stage. In the offline stage, RL-TAD learns the policy via reinforcement learning, where the environment is constructed from a surrogate model and a new reward policy is developed by integrating design experience. In the online stage, the trained agent is deployed to search for the TAD at different wind speeds. To verify the proposed RL-TAD, a case study is detailed. The empirical results show that RL-TAD converges to the optimal TAD 3–5 times faster than the genetic algorithm-based method in the offline stage and also achieves a better wind power coefficient. Moreover, the response time is less than 0.1 s when using the trained agent to search for the TAD, which demonstrates its potential for rapid optimal TAD searching. This rapid searching ability can further support real-time control of the wind turbine blade.
