Abstract

Electrical load forecasting of buildings is crucial in designing an energy operation strategy for smart city realization. Although artificial intelligence techniques have demonstrated excellent energy forecasting performance, it is difficult to explain how their outcomes are obtained, particularly when predictions are inaccurate. Explainable artificial intelligence (XAI) has recently received considerable attention in addressing this issue. This study proposes an explainable electrical load forecasting (XELF) methodology. We first preprocess data to configure the input variables and then build five tree-based ensemble models, which perform strongly on tabular data: random forest, gradient boosting machine (GBM), extreme gradient boosting, light GBM (LightGBM), and categorical boosting. We evaluate performance in terms of the mean absolute percentage error, the coefficient of variation of the root mean square error, and the normalized mean absolute error. Finally, we provide a rationale for interpreting the influence of the input variables and the decision-making process by applying Shapley additive explanations (SHAP), an XAI technique, to the best model. Experiments were conducted on an electrical load dataset from educational buildings to validate the practicality and validity of the methodology. We applied SHAP to the LightGBM model and performed the corresponding analyses and visualizations for XELF.
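The three evaluation metrics named above can be sketched as follows. This is a minimal illustration, not the authors' code; the example arrays are hypothetical, and NMAE is assumed here to be normalized by the mean actual load (normalization by the load range is another common convention).

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (%)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def cvrmse(y_true, y_pred):
    """Coefficient of variation of the RMSE (%): RMSE divided by the mean actual load."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true) * 100

def nmae(y_true, y_pred):
    """Normalized MAE (%), here normalized by the mean actual load (assumed convention)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred)) / np.mean(y_true) * 100

# Hypothetical hourly loads (kW) and forecasts, for illustration only.
actual = [100.0, 120.0, 90.0, 110.0]
forecast = [95.0, 125.0, 92.0, 108.0]
print(round(mape(actual, forecast), 2))    # prints 3.3
print(round(cvrmse(actual, forecast), 2))  # prints 3.63
print(round(nmae(actual, forecast), 2))    # prints 3.33
```

All three metrics are scale-free percentages, which is why they are suited to comparing forecast quality across buildings with different load magnitudes.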
