Abstract

Artificial intelligence (AI) models are increasingly ubiquitous in daily life, and their predictive and decision-making capabilities are being applied to tasks of all magnitudes, ranging from minor decisions to those with significant impacts on individuals and society. However, many of these models contain a multitude of redundant parameters, which renders their behavior opaque to human understanding. This lack of transparency raises concerns about the reliability and fairness of the decisions made by AI models, motivating a new field of research called eXplainable AI (XAI), which aims to elucidate the outcomes of complex AI models and to develop tools that enable human understanding. Given the increasing impact of machine learning in data-driven estimation and control, it becomes crucial to integrate XAI tools with control theory to better comprehend the decisions made by AI models in estimation and control tasks. In this article, we propose the use of an XAI method called Local Interpretable Model-Agnostic Explanations (LIME) to explain the mechanisms behind a black-box estimation algorithm processing time-series data. Moreover, we demonstrate that LIME can be used to identify a local linearized model that approximates the complex machine learning algorithm. We show that the identified local linearized model can shed light on the dynamics of the model that generated the training data.
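The sketch below illustrates the general idea in the abstract using the open-source `lime` package: LIME perturbs the inputs around one operating point, queries the black-box estimator, and fits a sparse linear surrogate whose weights can be read as a local linearized model. The data, feature names, and `blackbox_predict` stand-in here are illustrative assumptions, not the estimator or experiments from the paper.

```python
# Minimal sketch, assuming the open-source `lime` package and a black-box
# time-series estimator wrapped as a plain prediction function.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical training data: each row is a window of lagged measurements
# fed to the black-box estimator (names below are illustrative).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))  # 4 lagged inputs per sample

def blackbox_predict(X):
    """Stand-in for the black-box estimator; returns one estimate per row."""
    # Illustrative nonlinear map; in practice this is the trained model.
    return 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * np.tanh(X[:, 2] * X[:, 3])

explainer = LimeTabularExplainer(
    X_train,
    mode="regression",
    feature_names=["y[k-1]", "y[k-2]", "u[k-1]", "u[k-2]"],
)

# Explain one operating point: LIME fits a weighted linear surrogate
# on perturbed samples around x0.
x0 = X_train[0]
explanation = explainer.explain_instance(x0, blackbox_predict, num_features=4)

# The surrogate's coefficients act as a local linearized model of the
# estimator around x0.
print(explanation.as_list())
```

Repeating this procedure at several operating points gives a family of local linear approximations whose coefficients can be compared against the dynamics of the system that generated the data.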
