Abstract

Deep Learning (DL) models are becoming the preferred approach for process monitoring due to their higher prediction accuracy; however, they are still viewed as black boxes. Explainable Artificial Intelligence (XAI) methods seek to address this shortcoming by providing explanations that are either global (explaining the entire DL model) or local (explaining the result for each individual sample). Due to nonlinearities and other complexities inherent in chemical processes, local explanations are more suitable. This paper proposes a local XAI method that explains process monitoring results from a deep neural network (DNN) based on process alarms. The effectiveness of the proposed method is demonstrated using a continuous stirred-tank reactor (CSTR) case study and the Tennessee Eastman benchmark process. Our results show that the explanations provided by the proposed method assist operators in understanding the DNN's predictions during online process monitoring. Additionally, during model development, the method can offer insights that enable improvement of the DNN's architecture.
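The abstract does not spell out the algorithm, so the sketch below is only a generic illustration of what a local, alarm-grounded explanation can look like, not the paper's actual method: a gradient-based attribution (gradient × input, a standard saliency heuristic) ranks the process variables behind a single DNN prediction, and the top contributors are cross-referenced with the alarm limits they violate. The toy network, variable tags, alarm limits, and sample values are all hypothetical.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # reproducible toy weights

# Hypothetical, untrained fault-detection DNN: 4 process variables -> fault score in (0, 1).
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

var_names = ["temperature", "pressure", "level", "flow"]  # hypothetical process tags
alarm_hi = torch.tensor([350.0, 12.0, 0.9, 110.0])        # hypothetical high-alarm limits

# One online sample to be explained (temperature exceeds its alarm limit).
x = torch.tensor([355.0, 11.5, 0.85, 95.0], requires_grad=True)
fault_score = model(x)
fault_score.backward()  # d(score)/d(x_i): sensitivity of the prediction to each variable

# Local attribution via gradient x input, a common saliency heuristic.
contrib = (x.grad * x.detach()).abs()
ranked = sorted(zip(var_names, contrib.tolist()), key=lambda p: -p[1])

# Frame the explanation in terms the operator already knows: which of the
# top-contributing variables are also in alarm.
active = {n for n, xi, hi in zip(var_names, x.detach(), alarm_hi) if xi > hi}
for name, c in ranked[:2]:
    print(f"{name}: contribution={c:.3f} ({'IN ALARM' if name in active else 'normal'})")
```

Tying per-variable attributions to alarm status is what makes such an explanation "local": it justifies one prediction at a time in terms an operator can act on, rather than summarizing the behavior of the whole model.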
