Abstract
The development of interpretable deep learning methods has garnered attention in fault diagnosis. However, the effectiveness of interpretation methods based on different principles within deep learning fault diagnosis models for air handling units (AHUs) needs validation across different seasons. Additionally, a comprehensive metric for systematically evaluating and quantitatively comparing the interpretability performance of these methods is lacking. This study developed convolutional neural network models for AHU fault diagnosis during summer, winter, and spring, validated using ASHRAE RP-1312 fault data. Five feature-level interpretation methods – activation maximization, occlusion sensitivity, Shapley additive explanations (SHAP), gradient-weighted class activation mapping, and layer-wise relevance propagation (LRP) – were used to interpret the models’ diagnostic decision-making process. The results of these methods were normalized, and diagnostic criteria were established using a unified threshold. A composite performance metric (CPM) was created to assess and compare the methods. The results indicate that SHAP and LRP outperformed the other methods, with average CPMs of 91.42% and 89.62%, respectively. This research offers a systematic comparison of different interpretation methods in deep learning models for AHU fault diagnosis, providing a reference for applying interpretable deep learning models.
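The normalization and unified-threshold step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the min–max normalization choice, and the example threshold of 0.5 are all assumptions introduced here for clarity.

```python
import numpy as np

def normalize_attribution(attr):
    """Min-max normalize a feature attribution map to [0, 1].

    Hypothetical helper: the paper normalizes the outputs of the five
    interpretation methods onto a common scale; min-max scaling is one
    plausible choice, assumed here for illustration.
    """
    a = np.asarray(attr, dtype=float)
    rng = a.max() - a.min()
    if rng == 0:
        return np.zeros_like(a)
    return (a - a.min()) / rng

def salient_features(attr, threshold=0.5):
    """Mark features whose normalized attribution exceeds a unified threshold.

    The threshold value is an assumed example, not the one used in the study.
    """
    return normalize_attribution(attr) >= threshold

# Toy attribution scores for four input features (fabricated example data)
attr = [0.2, 1.8, 0.9, -0.4]
mask = salient_features(attr, threshold=0.5)
print(mask.tolist())  # → [False, True, True, False]
```

Applying the same normalization and threshold to every interpretation method puts their attributions on a common footing, which is what makes a quantitative comparison such as the CPM possible.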