Abstract

Artificial intelligence (AI) and machine learning (ML) are increasingly used in digital twin development for energy and engineering systems, but these models must be fair, unbiased, interpretable, and explainable: confidence in the trustworthiness of AI is critical. ML techniques have proven useful for predicting key parameters and improving model performance; however, before they can support decision-making, they must be auditable, accountable, and easy to understand. The use of explainable AI (XAI) and interpretable machine learning (IML) is therefore crucial for accurate prognostics, such as remaining useful life (RUL) prediction, in an intelligent digital twin system, ensuring that the AI model is transparent in its decision-making and that the predictions it generates can be understood and trusted by users. With explainable, interpretable, and trustworthy AI, intelligent digital twin systems can predict RUL more accurately, leading to better maintenance and repair planning and, ultimately, improved system performance. This paper explains the ideas of XAI and IML and justifies the important role of AI/ML in digital twin components, where XAI is needed to better understand the predictions. It covers the importance and fundamentals of XAI and IML at both the local and global levels, in terms of feature selection, model interpretability, and model diagnosis and validation, to ensure the reliable use of trustworthy AI/ML applications for RUL prediction.
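The global, feature-level interpretability described above can be illustrated with a minimal sketch: permutation importance applied to a regression model trained on synthetic degradation data. Note that the feature names, the synthetic data, and the choice of a random forest with scikit-learn's `permutation_importance` are illustrative assumptions, not methods taken from the paper.

```python
# Minimal sketch of global model interpretability for RUL prediction.
# All data and feature names are synthetic/hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical sensor features: temperature and vibration drive degradation;
# "noise" is an irrelevant feature the explanation method should rank lowest.
temperature = rng.normal(70.0, 5.0, n)
vibration = rng.normal(0.5, 0.1, n)
noise = rng.normal(0.0, 1.0, n)
# Illustrative RUL signal (hours) depending on temperature and vibration.
rul = 300.0 - 5.0 * temperature - 200.0 * vibration + rng.normal(0.0, 2.0, n)

X = np.column_stack([temperature, vibration, noise])
features = ["temperature", "vibration", "noise"]

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in test-set score when one feature is shuffled,
# giving a global, model-agnostic ranking of feature relevance.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

A global ranking like this supports the feature-selection and model-diagnosis uses of IML mentioned in the abstract: a model that assigns high importance to a physically irrelevant signal can be flagged before its RUL predictions are trusted for maintenance planning.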
