Abstract

Residential load forecasting is of great significance for improving the energy efficiency of smart home services. Deep-learning techniques, e.g., long short-term memory (LSTM) neural networks, can considerably improve the performance of prediction models. However, these black-box networks are generally unexplainable, which makes it hard for customers to understand forecasting results in depth and to respond rapidly to uncertain circumstances, while practical engineering demands a high standard of prediction reliability. In this paper, an interpretable deep-learning method, in the spirit of explainable artificial intelligence (XAI), is proposed to solve the multi-step residential load forecasting problem. An encoder–decoder network architecture based on multi-variable LSTM (MV-LSTM) is developed for multi-step probabilistic load forecasting. A mixture attention mechanism is introduced at each prediction time step to better capture the different temporal dynamics of the multivariate input sequence in an interpretable form. By evaluating the contribution of each variable to the forecast, multi-quantile forecasts at multiple future time steps can be generated. Experiments on a real-world data set show that the proposed method achieves good prediction performance while providing valuable explanations for the prediction results. The findings help end users gain insight into the forecasting model, bridging the gap between them and advanced deep-learning techniques.
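As background for the multi-quantile forecasting mentioned above, the following is a minimal sketch (not the authors' implementation) of the standard quantile (pinball) loss that such models are typically trained with: one loss term per quantile level, penalizing under-prediction by q and over-prediction by 1 − q. The quantile levels and example data below are illustrative assumptions.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a quantile level q in (0, 1).

    Under-prediction (y_true > y_pred) is weighted by q and
    over-prediction by (1 - q), so minimizing this loss yields an
    estimate of the q-th conditional quantile of the target.
    """
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# Illustrative multi-quantile forecast over three future time steps:
# one prediction series per quantile level.
quantiles = [0.1, 0.5, 0.9]
y_true = np.array([1.0, 2.0, 3.0])
y_pred = {
    0.1: np.array([0.8, 1.7, 2.5]),  # lower band
    0.5: np.array([1.0, 2.1, 3.0]),  # median forecast
    0.9: np.array([1.3, 2.6, 3.6]),  # upper band
}
total_loss = sum(pinball_loss(y_true, y_pred[q], q) for q in quantiles)
```

Summing (or averaging) the loss over all quantile levels lets a single network emit the whole set of quantiles at once, which is how probabilistic load forecasts are commonly produced.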
