Abstract

AI systems have seen significant adoption across many domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm humans. Moreover, fairness, privacy, transparency, and explainability are vital to developing trust in AI systems. As stated in IBM's Describing Trustworthy AI (https://www.ibm.com/watson/trustworthy-ai), “Trust comes through understanding. How AI-led decisions are made and what determining factors were included are crucial to understand.” The subarea concerned with explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained, including biases the data might contain, a lack of data points in a particular region of the example space, the fairness of the data-gathering process, and feature importances. Beyond these, however, it is critical to have human-centered explanations directly related to decision-making, similar to how a domain expert makes decisions based on “domain knowledge,” including well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classifications, recommendations, and predictions), and thereby develop trust in the system, it is necessary to involve explicit domain knowledge that humans understand and use. Contemporary XAI methods have yet to address explanations that enable decision-making in the way an expert does. Figure 1 shows the stages of adoption of an AI system into the real world.
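
As a concrete illustration of the feature-importance style of explanation mentioned above (a contemporary XAI output, not the human-centered, knowledge-grounded explanations this work argues for), the following minimal sketch computes permutation importances with scikit-learn. The dataset, model, and parameter choices are illustrative assumptions for demonstration only.

    # Minimal sketch of a feature-importance explanation, one of the
    # contemporary XAI outputs mentioned above. The dataset, model, and
    # parameters are illustrative assumptions, not this paper's method.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: how much does shuffling each feature
    # degrade held-out accuracy? Larger drops mean higher importance.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    for name, mean, std in zip(X.columns,
                               result.importances_mean,
                               result.importances_std):
        print(f"{name}: {mean:.3f} +/- {std:.3f}")

Such scores indicate which inputs drive a model's predictions, but, as argued above, they do not by themselves constitute the explicit, guideline-style domain knowledge an expert would use to justify a decision.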
