Abstract

Process monitoring is crucial to ensure operational reliability and to prevent industrial accidents. Data-driven methods have become the preferred approach for fault detection and diagnosis. In particular, deep learning algorithms such as Deep Neural Networks (DNNs) show good potential even in complex processes. A key shortcoming of DNNs is the difficulty in interpreting their classification results. Emerging approaches from explainable Artificial Intelligence (XAI) seek to address this shortcoming. This paper proposes a method based on the Shapley value framework and its implementation using integrated gradients to identify the variables that lead a DNN to classify an input as a fault. The method estimates the marginal contribution of each variable to the DNN's output, averaged over the path from the baseline (in this case, the process's normal state) to the current sample. We illustrate the resulting variable attribution using a numerical example and the benchmark Tennessee Eastman process. Our results show that the proposed methodology provides accurate, sample-specific explanations of the DNN's predictions. These explanations can be used offline by the model developer to improve the DNN if necessary, and in real time by the plant operator to understand the black-box DNN's predictions and decide on operational strategies.
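
The abstract does not include code, but the path-integrated attribution it describes can be illustrated with a minimal sketch. The function below approximates integrated gradients along the straight-line path from a baseline (e.g., the process's normal operating point) to the current sample; `model_grad` is a hypothetical callable returning the gradient of the DNN's fault-class output with respect to the input, and is an assumption, not part of the paper.

```python
import numpy as np

def integrated_gradients(model_grad, baseline, sample, steps=50):
    """Approximate per-variable attributions for a scalar model output.

    model_grad(x): returns the gradient of the model's fault-class output
                   with respect to the input vector x (assumed interface).
    baseline:      reference input, e.g. the process's normal state.
    sample:        the current sample to be explained.
    """
    # Interpolation points on the straight-line path from baseline to sample
    alphas = np.linspace(0.0, 1.0, steps + 1)
    grads = np.array(
        [model_grad(baseline + a * (sample - baseline)) for a in alphas]
    )
    # Trapezoidal approximation of the average gradient along the path
    avg_grad = (grads[:-1] + grads[1:]).mean(axis=0) / 2.0
    # Scale by the input difference to obtain per-variable attributions
    return (sample - baseline) * avg_grad
```

Under this sketch, the attributions approximately sum to the difference between the model's output at the sample and at the baseline, which is the completeness property linking integrated gradients to the Shapley value framework referenced in the abstract.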
