Abstract

Wind speed forecasting (WSF) is a viable option for increasing the efficiency of energy consumption. Previous forecasting methods are optimized for global accuracy, yet their performance varies at each time step due to local variations in wind characteristics, which is not ideal. To address this problem, a novel dynamic selection of the best model (DSM) approach is proposed for improved wind speed forecasting, using the on-policy state-action-reward-state-action (SARSA) reinforcement learning (RL) algorithm. DSM is formulated as an RL problem and solved with an on-policy SARSA agent. The proposed approach consists of two parts: a forecasting pool of models (FPM) and a learning agent. The FPM comprises five robust forecasting models that have been trained and tuned; these models perform the WSF individually, and the SARSA agent performs the DSM at each time step. The proposed approach is evaluated for 1 h ahead (1HA) WSF using two real-time wind speed datasets from Garden City, Manhattan, and Idalia, Colorado. This study also provides a thorough comparison of the proposed approach against an off-policy Q-learning algorithm for the DSM (QL-DSM). Compared to the FPM's individual models, the proposed SARSA-DSM approach improved prediction accuracy by 24.27% and 39.73% in the two case studies, and by 14.57% and 30.25% over the QL-DSM.
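The abstract does not give implementation details, but the selection mechanism it describes can be sketched. The following is a minimal illustration of SARSA-driven dynamic model selection, under assumptions the abstract does not confirm: a tabular state given by the index of the previously best model, an action that picks one of the five FPM models, a reward equal to the negative absolute forecast error, and arbitrary hyperparameter values. The array names, synthetic data, and helper functions are all hypothetical, not taken from the paper.

```python
import numpy as np

N_MODELS = 5                         # size of the forecasting pool of models (FPM)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # assumed learning rate, discount, exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((N_MODELS, N_MODELS))   # Q[state, action]

def epsilon_greedy(state):
    """Behaviour policy; SARSA is on-policy, so the same policy also picks a'."""
    if rng.random() < EPS:
        return int(rng.integers(N_MODELS))
    return int(np.argmax(Q[state]))

def sarsa_dsm(forecasts, actual):
    """forecasts: (T, N_MODELS) 1HA predictions from the tuned FPM models;
    actual: (T,) observed wind speeds. Returns the model selected at each step."""
    T = len(actual)
    chosen = np.empty(T, dtype=int)
    state = 0
    action = epsilon_greedy(state)
    for t in range(T):
        chosen[t] = action
        # Assumed reward: negative absolute error of the selected model's forecast.
        reward = -abs(forecasts[t, action] - actual[t])
        # Assumed state encoding: index of the model that was most accurate this step.
        next_state = int(np.argmin(np.abs(forecasts[t] - actual[t])))
        next_action = epsilon_greedy(next_state)   # on-policy choice of a'
        # SARSA update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))
        Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state, next_action]
                                     - Q[state, action])
        state, action = next_state, next_action
    return chosen

# Demo on synthetic data (illustrative only, not the paper's datasets):
T = 200
actual = 8 + 2 * np.sin(np.linspace(0, 12, T))                      # wind speed (m/s)
forecasts = actual[:, None] + rng.normal(0, [0.3, 0.5, 0.7, 0.9, 1.1], (T, N_MODELS))
picks = sarsa_dsm(forecasts, actual)
```

The on-policy character that distinguishes SARSA-DSM from the QL-DSM baseline shows in the update rule: the bootstrap term uses Q(s', a') with a' drawn from the same ε-greedy policy, whereas off-policy Q-learning would instead use max over a' of Q(s', a').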
