Abstract

The energy sector plays an important role in socioeconomic and environmental development. Accurately forecasting energy demand across various time horizons yields substantial advantages, such as better planning and management of energy resources. Different methodologies, including mathematical, statistical, and machine learning models, have been proposed for energy consumption prediction. Nevertheless, some studies claim that deep learning models can outperform other approaches when dealing with time series data characterized by high granularity. Deep learning models can be implemented in different ways, for example with recurrent unit layers, one-dimensional convolutions, or Transformers. It is therefore worthwhile to compare the performance of these architectural types using a methodology that applies feature selection and model interpretation to underscore the significance of meteorological features and timestamps in the forecasting task. This study presents a comparative forecasting methodology covering four architectures: the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and the attention-based Time Series Transformer (TST), while exploring the relationship between model performance and the resources required by the number of features and the time resolution. The findings reveal that the Transformer architecture works better with fewer samples, despite exhibiting overfitting. In addition, the trained models were combined into a voting ensemble whose weights were optimized with the Simulated Annealing metaheuristic, yielding a 23% improvement at hourly granularity.
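The abstract does not detail how the Simulated Annealing metaheuristic tunes the voting ensemble, so the following is only a minimal sketch of one plausible setup: a weighted-average ensemble of per-model forecasts whose weights are perturbed by an annealing loop that minimizes mean absolute error. All function names, the perturbation scale, and the cooling schedule are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length series."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def ensemble_predict(weights, model_preds):
    """Weighted average of each model's forecast at every timestep."""
    total = sum(weights)
    return [
        sum(w * preds[i] for w, preds in zip(weights, model_preds)) / total
        for i in range(len(model_preds[0]))
    ]

def anneal_weights(model_preds, y_true, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing over ensemble weights (hypothetical schedule).

    Starts from equal weights, proposes small Gaussian perturbations, and
    accepts worse candidates with probability exp(-delta / temperature).
    """
    rng = random.Random(seed)
    n = len(model_preds)
    weights = [1.0 / n] * n
    cur = mae(y_true, ensemble_predict(weights, model_preds))
    best_w, best = weights[:], cur
    temp = t0
    for _ in range(steps):
        # Perturb weights, keeping them strictly positive.
        cand = [max(1e-6, w + rng.gauss(0.0, 0.05)) for w in weights]
        cost = mae(y_true, ensemble_predict(cand, model_preds))
        if cost < cur or rng.random() < math.exp((cur - cost) / temp):
            weights, cur = cand, cost
            if cur < best:
                best_w, best = weights[:], cur
        temp *= cooling  # geometric cooling
    return best_w, best

# Toy example: a true series and three synthetic "model" forecasts
# (stand-ins for RNN/LSTM/GRU/TST outputs; not real results).
y_true = [10.0, 12.0, 13.0, 15.0, 14.0, 16.0]
preds = [
    [11.5, 13.5, 14.5, 16.5, 15.5, 17.5],  # biased high
    [9.0, 11.0, 12.0, 14.0, 13.0, 15.0],   # biased low
    [10.1, 11.9, 13.1, 14.9, 14.1, 15.9],  # nearly exact
]
weights, score = anneal_weights(preds, y_true)
```

Because the search starts from the equal-weight ensemble, the annealed score can only match or improve on that baseline; in practice one would run this on a held-out validation split rather than the training series.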
