Abstract

The application of Artificial Intelligence (AI) and Machine Learning (ML) models is increasingly leveraged to automate and optimize Data Centre (DC) operations. However, the interpretability and transparency of these complex models pose critical challenges. This paper therefore explores the SHapley Additive exPlanations (SHAP) model explainability method to address the interpretability and transparency challenges of predictive maintenance models. The method computes a Shapley value for each feature, quantifying that feature's contribution to the model's output. This quantification can assist DC operators in understanding the reasoning behind a model's predictions and in making proactive decisions. Because DC operations change continuously, we additionally investigate how SHAP can capture the temporal behavior of feature importance in this dynamic environment. We validate our approach with selected predictive models using a real dataset from a High-Performance Computing (HPC) DC, sourced from the Enea CRESCO6 cluster in Italy. The experimental analyses are presented using summary, waterfall, force, and dependence explanations, and we further perform a temporal feature-importance analysis to capture how each feature's impact on the model output evolves over time. The results demonstrate that model explainability improves transparency and facilitates collaboration between DC operators and AI systems, enhancing the operational efficiency and reliability of DCs by providing a quantitative assessment of each feature's impact on the model's output.
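As a concrete illustration of this workflow, the minimal Python sketch below uses the open-source shap library with a generic tree-based regressor to produce the summary, waterfall, force, and dependence explanations discussed above, and aggregates mean absolute SHAP values per day as a simple proxy for temporal feature importance. The file name, feature names, target column, and aggregation window are illustrative assumptions and do not reproduce the paper's CRESCO6 setup or models.

    # Minimal sketch (not the authors' code): SHAP explanations for a
    # tree-based predictive-maintenance model. All names below are placeholders.
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical telemetry table with a timestamp column and sensor features.
    df = pd.read_csv("hpc_telemetry.csv", parse_dates=["timestamp"])
    features = ["cpu_temp", "fan_speed", "ambient_temp", "power_draw"]
    X, y = df[features], df["time_to_failure"]

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    explanation = explainer(X)  # shap.Explanation object (values, base values, data)

    # Global and local explanation plots of the kind used in the analysis.
    shap.summary_plot(explanation.values, X)                     # global importance
    shap.plots.waterfall(explanation[0])                         # single prediction
    shap.force_plot(explainer.expected_value, explanation.values[0],
                    X.iloc[0], matplotlib=True)                  # additive force layout
    shap.dependence_plot("cpu_temp", explanation.values, X)      # feature dependence

    # Temporal feature importance: mean |SHAP| per feature, aggregated by day,
    # to track how each feature's influence drifts over time.
    abs_shap = pd.DataFrame(abs(explanation.values), columns=features,
                            index=pd.DatetimeIndex(df["timestamp"]))
    temporal_importance = abs_shap.resample("1D").mean()
    print(temporal_importance.head())

In practice, the aggregation window (here one day) would be chosen to match the sampling rate of the DC telemetry and the maintenance planning horizon.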
