Abstract

Artificial intelligence (AI) technology has become an important tool for supporting the analysis and control of complex, time-varying power systems. Although deep reinforcement learning (DRL) has been applied in the power system field, most DRL models are regarded as black boxes: they are difficult to explain and cannot be used in settings where human operators must participate in decision-making. Using explainable AI (XAI) techniques to explain why power system models make certain decisions is as important as the accuracy of the decisions themselves, because explanations ensure trust and transparency in the decision-making process. This article discusses the interpretability of DRL models for power system emergency control. The proposed interpretable method is a backpropagation-based deep explainer built on Shapley additive explanations (SHAP), named the Deep-SHAP method. Deep-SHAP is adopted to provide an interpretable model for a DRL-based emergency control application. For the DRL model, the importance of each input feature is quantified to obtain its contribution to the model's output. Further, feature classification of the inputs and probabilistic analysis of the outputs in the XAI model are added to the interpretability results for better clarity.
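
The abstract does not give implementation details, but as a rough illustration of the kind of workflow it describes, the sketch below applies the SHAP library's DeepExplainer to a stand-in policy network. The network architecture, feature dimensions, and data here are hypothetical placeholders, not the authors' model.

```python
# Minimal sketch (not the paper's code): Deep-SHAP applied to a placeholder DRL policy network.
import numpy as np
import torch
import torch.nn as nn
import shap

# Stand-in for a trained DRL policy/Q-network mapping grid observations to action values.
# The 8 input features and 4 discrete emergency-control actions are assumed for illustration.
policy_net = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),
)

# Background (baseline) observations and the states to be explained;
# in practice these would come from recorded system states or a replay buffer.
background = torch.randn(100, 8)
states_to_explain = torch.randn(5, 8)

# Deep-SHAP: a backpropagation-based approximation of Shapley values for deep networks.
explainer = shap.DeepExplainer(policy_net, background)
shap_values = explainer.shap_values(states_to_explain)

# Each SHAP value quantifies a feature's contribution to an action's predicted value,
# i.e., the feature-importance attribution discussed in the abstract.
print(np.array(shap_values).shape)
```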
