Abstract

Deep Learning (DL) models are becoming the preferred approach for process monitoring due to their higher prediction accuracy; however, they are still viewed as black boxes. Explainable Artificial Intelligence (XAI) methods seek to address this shortcoming by providing explanations that are either global (explaining the entire DL model) or local (explaining the result for each individual sample). Because of the nonlinearities and other complexities inherent in chemical processes, local explanations are more suitable. This paper proposes a local XAI method that explains the process monitoring results of a deep neural network (DNN) in terms of process alarms. The effectiveness of the proposed method is demonstrated on a continuous stirred-tank reactor (CSTR) case study and the Tennessee Eastman benchmark process. Our results show that the explanations provided by the proposed method help operators understand the DNN's predictions during online process monitoring. Additionally, during model development, the method can offer insights that guide improvements to the DNN's architecture.
