Abstract

Adoption of artificial intelligence (AI) is causing a paradigm change in many fields. Its practical utilization, however, especially in safety-critical applications such as medicine, remains limited, mainly due to the black-box nature of most advanced AI models, which makes it difficult to understand why and how a model produces a particular output or decision. To overcome this issue, various methods and techniques have been proposed within the emerging field of explainable artificial intelligence (XAI). In this paper, we introduce a user-centric and interactive framework that enables a holistic understanding of AI systems. The proposed framework is designed to aid the development of more explainable AI systems by promoting transparency and trust in their use, and to allow different stakeholders to better understand and evaluate AI decisions. To illustrate the effectiveness of the framework, we present a case study of an AI system that analyzes optical coherence tomography (OCT) images, and we report the development of this example case using the proposed framework.
