Abstract

As artificial intelligence (AI) systems are applied ever more widely, using explainable AI (XAI) techniques to explain why machine learning (ML) models make certain predictions has become as important as the accuracy of those predictions, because it ensures trust and transparency in the model's decision-making process. Although deep reinforcement learning (DRL) has achieved outstanding results in many fields, DRL models are difficult to explain and therefore cannot be deployed in safety-critical settings. In particular, for DRL-based emergency control in power systems, an intuitive and reliable XAI technique is urgently needed. In this work, the Shapley additive explanations (SHAP) method is adopted to provide a reasonable, interpretable model for an open-source platform named Reinforcement Learning for Grid Control (RLGC). Through a series of summary plots, force plots, and the probability of SHAP values, DRL-based under-voltage load shedding in power systems can be interpreted much more easily and clearly. To the best of our knowledge, this work presents the first use of the SHAP method and the probability of SHAP values to explain DRL-based emergency control in power systems.
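The sketch below illustrates, in broad strokes, how SHAP summary and force plots of the kind described above can be produced for a black-box DRL policy. It is not taken from the paper: the `policy_fn` stand-in, the per-bus voltage feature names, and the synthetic observations are hypothetical placeholders for the trained RLGC agent and its observation space, and only the standard `shap.KernelExplainer`, `shap.summary_plot`, and `shap.force_plot` calls are assumed.

```python
# Minimal sketch (assumptions noted above): explaining a DRL load-shedding
# decision with SHAP. Replace policy_fn and the observations with the real
# trained agent and states recorded from the RLGC environment.
import numpy as np
import shap

rng = np.random.default_rng(0)

# Hypothetical feature names: per-bus voltage observations seen by the agent.
feature_names = [f"V_bus_{i}" for i in range(1, 9)]

# Stand-in for the trained policy's output (e.g. the value of the
# "shed load" action); lower minimum voltage -> higher shedding score.
def policy_fn(obs_batch):
    return 1.0 / (1.0 + np.exp(10.0 * (obs_batch.min(axis=1) - 0.95)))

# Background observations approximate normal operating conditions;
# the observations to explain mimic states seen during a disturbance.
background = rng.uniform(0.9, 1.05, size=(100, len(feature_names)))
to_explain = rng.uniform(0.7, 1.0, size=(5, len(feature_names)))

# Model-agnostic KernelExplainer works with any black-box prediction function.
explainer = shap.KernelExplainer(policy_fn, background)
shap_values = explainer.shap_values(to_explain, nsamples=200)

# Global view: which observed voltages drive the shedding decision overall.
shap.summary_plot(shap_values, to_explain, feature_names=feature_names)

# Local view: per-feature contributions to one specific decision.
shap.force_plot(explainer.expected_value, shap_values[0], to_explain[0],
                feature_names=feature_names, matplotlib=True)
```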
