Abstract
Machine learning (ML) methods are now widely adopted in structural health monitoring (SHM), yet they remain largely black boxes. Given the significant responsibility associated with SHM decisions, understanding the rationale behind a model's predictions is critically important; in some cases, even experienced experts struggle to find evidence related to structural integrity within complex structural signals, so relying solely on black-box SHM systems carries inherent risk. Trustworthiness is key for decision-makers when acting on a prediction or deciding whether to deploy a new model, and explanations can also offer insights into the models themselves, transforming untrustworthy models or predictions into reliable ones. Indirect SHM using passing vehicles, a technique that has emerged over the past two decades, offers a rapid and cost-effective solution for bridge monitoring; however, its signal components are affected by factors such as vehicle dynamics and road roughness, making them more complex than those of the direct method. Although ML methods have shown promising results in this domain, their predictions still require explanation. In this work, SHAP (SHapley Additive exPlanations) tools are utilized to interpret the predictions of ML methods in indirect SHM. Using simulation databases, the trustworthiness of models is demonstrated in three ways: deciding whether an individual prediction should be trusted, choosing between models, and determining why a classifier should not be trusted.
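To illustrate the kind of SHAP analysis the abstract describes, below is a minimal sketch, not the paper's code: the synthetic features, the stand-in damage label, and the choice of a gradient-boosted classifier are all illustrative assumptions. It shows how per-feature Shapley attributions for a single prediction can be inspected to judge whether the model relied on physically meaningful signal features.

```python
# A minimal sketch (not the paper's implementation): attributing a
# classifier's damage prediction to input features with SHAP.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical features extracted from a passing vehicle's acceleration
# record, e.g. spectral amplitudes near candidate bridge frequencies.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # stand-in damage label

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles; each
# value is one feature's additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Inspect one prediction: features with large positive values pushed the
# model toward the "damaged" class. Checking whether those features are
# physically plausible is the trust assessment discussed in the abstract.
print(dict(zip([f"f{i}" for i in range(6)], np.round(shap_values[0], 3))))
```

In this framing, comparing attribution patterns across models, or spotting a classifier that leans on spurious features, corresponds to the model-selection and distrust-diagnosis use cases listed above.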