Abstract

Free-form text-based maintenance and service records related to industrial assets capture the observations and actions of service engineers and are a crucial resource for assessing system-level asset health. To facilitate tracking of historical asset health issues, these records are categorized using tags from a predefined taxonomy, a mostly manual and time-consuming process. Given that these records offer valuable information for troubleshooting maintenance issues, automating this process with deep learning (DL) based natural language processing (NLP) models can yield significant operations and maintenance (O&M) cost reductions. However, such data-driven models are not expected to be fully accurate, requiring human experts to regularly review all DL predictions to verify or correct them, which is itself a highly inefficient and costly process. On the other hand, new records with novel or ambiguous context are more appropriately resolved by a human expert. The objective of the work described in this paper is to create an interpretable mechanism that can assess the reliability of individual predictions from DL-based maintenance record classifiers and thereby support the design of a mixed-initiative system. This system identifies scenarios where predictions are reliable enough for automated decisions and scenarios where poor reliability calls for human intervention. Additionally, the system aids decision support by providing exemplars from the training set that can enhance a human tagger's productivity and tagging quality. Given a set of tagged records, it can also identify instances where the originally assigned tags are likely inaccurate or noisy. We illustrate these outcomes through tagging of maintenance records from the aviation domain, demonstrating improvements over purely human-based or purely DL-based tag assignments.
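The abstract does not specify the paper's actual reliability mechanism, but the mixed-initiative routing it describes can be sketched in a minimal form: route a record to automated tagging when the classifier's top-class probability clears a confidence threshold, defer to a human otherwise, and retrieve similar training exemplars as decision support. The threshold value, the use of softmax probabilities as the reliability score, and cosine similarity over embeddings are all illustrative assumptions, not details from the paper.

```python
import math

def route_prediction(probs, threshold=0.9):
    """Route a record: auto-accept the DL tag if the top-class
    probability clears the threshold, else defer to a human tagger.
    The 0.9 threshold is an illustrative assumption."""
    return "auto" if max(probs) >= threshold else "human"

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_exemplars(query_vec, train_vecs, k=3):
    """Indices of the k training records most similar to the query
    embedding -- shown to the human tagger as decision support."""
    order = sorted(range(len(train_vecs)),
                   key=lambda i: cosine(query_vec, train_vecs[i]),
                   reverse=True)
    return order[:k]
```

In a system like the one described, the same similarity machinery could also flag likely label noise: a tagged record whose nearest neighbors overwhelmingly carry a different tag is a candidate for human re-review.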
