Abstract

Deep Reinforcement Learning (DRL) algorithms have been applied to extract maximum power from photovoltaic (PV) modules under a variety of environmental conditions. However, it is difficult for a human to explain how a DRL-based maximum power point tracking (MPPT) controller works, because it is built on Neural Networks (NNs), which are generally complex and non-linear. Various Explainable Artificial Intelligence (XAI) techniques have been proposed to interpret NNs in power system applications, but MPPT controllers have yet to be analyzed in this way. This paper presents the application of XAI techniques to DRL agents for MPPT. Two distinct DRL agents were developed with the Deep Deterministic Policy Gradient (DDPG) algorithm, one with and one without access to the converter's duty cycle as an observation, and were analyzed using two XAI techniques: Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). The results reveal that the converter's input power is the most important observation for the DRL agents when the converter operates far from the maximum power point. As the converter approaches the maximum power point, the agents depend strongly on the change in converter power between time steps. When the converter's duty cycle is available as an observation, the agents rely heavily on it and largely disregard the other observations when making decisions.
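
As a concrete illustration of the kind of analysis described above, the minimal sketch below applies Kernel SHAP to a stand-in actor network. The three-element observation (input power, change in power, duty cycle) mirrors the quantities named in the abstract, but the toy policy, the value ranges, and the sampling settings are illustrative assumptions, not the trained DDPG agent from the paper.

```python
# Minimal sketch: attributing a DDPG-style actor's action to its observations
# with Kernel SHAP. The policy below is a hand-written placeholder for a
# trained actor network; its weights and the observation ranges are assumptions.
import numpy as np
import shap

rng = np.random.default_rng(0)

def actor(obs: np.ndarray) -> np.ndarray:
    """Stand-in actor: maps observations (n_samples, 3) -> duty-cycle adjustment."""
    p_in, dp, duty = obs[:, 0], obs[:, 1], obs[:, 2]
    # Toy behaviour: push the duty cycle in the direction of increasing power.
    return np.tanh(0.02 * p_in * np.sign(dp) - 0.5 * (duty - 0.5))

# Background set approximating observations seen during training
# (columns: converter input power, change in power, duty cycle).
background = np.column_stack([
    rng.uniform(0.0, 100.0, 200),   # input power
    rng.uniform(-5.0, 5.0, 200),    # change in power between time steps
    rng.uniform(0.1, 0.9, 200),     # duty cycle
])

explainer = shap.KernelExplainer(actor, shap.sample(background, 50))

# Explain the action chosen at one operating point far from the MPP.
obs = np.array([[20.0, 4.0, 0.2]])
shap_values = explainer.shap_values(obs, nsamples=500)

for name, phi in zip(["input power", "power change", "duty cycle"], shap_values[0]):
    print(f"{name:>12}: {phi:+.4f}")
```

LIME can be applied to the same wrapped policy function in an analogous way, for example with lime.lime_tabular.LimeTabularExplainer in regression mode, to obtain comparable local explanations of individual control decisions.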
