Abstract
Interpretable machine learning has recently attracted considerable interest in the research community. Currently, most work focuses on models trained on non-time-series data. LIME and SHAP are well-known examples; they provide visual explanations of feature contributions to model decisions on an instance basis. Other post-hoc approaches, such as attribute-wise interpretations, likewise focus exclusively on tabular data. Little research has been done so far on the interpretability of predictive models trained on time series data. Therefore, this work focuses on explaining decisions made by black-box models, such as Deep Neural Networks, trained on sensor data. In this paper, we present the results of a qualitative study in which we systematically compare the types of explanations and the properties (e.g., method, computational complexity) of existing interpretability approaches for models trained on the PHM08-CMAPSS dataset. We compare shallow models such as regression trees (with limited depth) against black-box models such as Long Short-Term Memory networks (LSTMs) and Support Vector Regression (SVR). We train the models on processed sensor data and explain their output using LIME, SHAP, and attribute-wise methods. Throughout our experiments, we point out the advantages and disadvantages of these approaches for interpreting models trained on time series data. Our results can serve as a guideline for selecting a suitable explainability method for black-box predictive models trained on time series data.
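As a minimal sketch of the kind of pipeline the abstract describes (not the authors' actual code), the Python snippet below applies SHAP's model-agnostic KernelExplainer to an SVR trained on windowed sensor features; the feature count, the synthetic data, and all variable names are illustrative assumptions.

    # Minimal illustrative sketch: explaining an SVR with SHAP.
    # The data here is synthetic; a real setup would use processed
    # CMAPSS sensor windows and remaining-useful-life targets.
    import numpy as np
    import shap
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 14))  # assumed: 14 aggregated sensor features
    y_train = rng.normal(size=200)        # assumed: RUL-like regression targets

    model = SVR().fit(X_train, y_train)

    # KernelExplainer is model-agnostic: it only needs a prediction
    # function and a background sample to estimate feature contributions.
    explainer = shap.KernelExplainer(model.predict, shap.sample(X_train, 50))
    shap_values = explainer.shap_values(X_train[:1])  # explain one instance
    print(shap_values)

Because KernelExplainer treats the model as a black box, the same pattern applies to an LSTM once its input windows are flattened into a fixed-length feature vector.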