In engineering, prognostics can be defined as the estimation of the remaining useful life (RUL) of a system given its current and past health conditions. The field has drawn attention from researchers, industry, and government because such technology can improve efficiency and lower maintenance costs in a variety of technical applications. One approach to prognostics that has gained increasing attention is the use of data-driven methods, which typically rely on pattern recognition and machine learning to estimate the residual life of equipment from historical data. Despite their promising results, a major disadvantage of these methods is that they are difficult to interpret, that is, it is hard to understand why a certain remaining-useful-life prediction was made at a certain point in time. Improving the interpretability of these models could facilitate the adoption of data-driven prognostics in domains such as aeronautics, manufacturing, and energy, where certification is critical. To help address this issue, we use Local Interpretable Model-agnostic Explanations (LIME) from the field of eXplainable Artificial Intelligence (XAI) to analyze the prognostics of a Gated Recurrent Unit (GRU) on the C-MAPSS data. We select the GRU because it is a deep learning model that a) has an explicit temporal dimension, b) has shown promising results in prognostics, and c) is simpler than other recurrent networks. Our results suggest that LIME makes it possible to infer the feature importance of the GRU both globally (for the entire model) and locally (for a given RUL prediction).
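To illustrate the general idea, the sketch below shows one plausible way to combine LIME with a GRU-based RUL regressor on sliding windows of sensor data. It is a minimal example, not the paper's actual pipeline: the window size, sensor count, variable names (`X_train`, `x_query`), and use of Keras and the `lime` package's tabular explainer are all assumptions made for illustration.

```python
# Hypothetical sketch: explaining a GRU-based RUL regressor with LIME.
# Window shape, data, and model size are illustrative placeholders,
# not the configuration used in the paper.
import numpy as np
import tensorflow as tf
from lime.lime_tabular import LimeTabularExplainer

window, n_features = 30, 14  # assumed sliding-window shape for C-MAPSS sensor channels

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, window, n_features))  # placeholder sensor windows
x_query = X_train[0]                                  # one window whose prediction we explain

# A small GRU regressor mapping a sensor window to a single RUL value.
model = tf.keras.Sequential([
    tf.keras.layers.GRU(64, input_shape=(window, n_features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, ...)  # training omitted in this sketch

def predict_rul(flat_samples):
    """LIME perturbs flattened windows; reshape them back before predicting."""
    windows = flat_samples.reshape(-1, window, n_features)
    return model.predict(windows, verbose=0).ravel()

# Treat each (time step, sensor) pair as one tabular feature for LIME.
feature_names = [f"t{t}_s{s}" for t in range(window) for s in range(n_features)]
explainer = LimeTabularExplainer(
    X_train.reshape(len(X_train), -1),  # background data, flattened
    mode="regression",
    feature_names=feature_names,
)

# Local explanation for one RUL prediction (one engine at one point in time).
exp = explainer.explain_instance(x_query.reshape(-1), predict_rul, num_features=10)
print(exp.as_list())  # (feature, weight) pairs contributing to this prediction
```

Under this kind of setup, local explanations like the one above can be aggregated over many windows to approximate a global picture of which sensors the GRU relies on, which is the distinction between local and global feature importance referred to in the abstract.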