Abstract

Clinical decision support systems (CDSSs) are computer applications whose goal is to facilitate clinicians' decision-making. In recent years, there has been growing interest in applying machine learning (ML) models within CDSSs to predict clinical outcomes. The limited interpretability of many ML models, however, is a major barrier to clinical adoption. This challenge has sparked research interest in interpretable and explainable AI, commonly known as XAI. XAI methods construct and communicate explanations of the predictions made by ML models so that end users can interpret those predictions. However, these methods are typically not designed around end users' needs; rather, they reflect developers' intuitions about what constitutes a good explanation. Furthermore, XAI methods are tailored neither to the specific tasks a user will undertake nor to the interface used to perform those tasks. To address these issues, we propose to develop a visual analytics tool that explains an ML model for clinical applications and whose design explicitly accounts for the context of tasks and the needs of end users.
