Explainable AI for Clinical Decision Support Systems: Literature Review, Key Gaps, and Research Synthesis
While Artificial Intelligence (AI) promises significant enhancements for Clinical Decision Support Systems (CDSSs), the opacity of many AI models remains a major barrier to clinical adoption, primarily due to interpretability and trust challenges. Explainable AI (XAI) seeks to bridge this gap by making model reasoning understandable to clinicians, but technical XAI solutions have too often failed to address real-world clinician needs, workflow integration, and usability concerns. This study synthesizes persistent challenges in applying XAI to CDSS—including mismatched explanation methods, suboptimal interface designs, and insufficient evaluation practices—and proposes a structured, user-centered framework to guide more effective and trustworthy XAI-CDSS development. Drawing on a comprehensive literature review, we detail a three-phase framework encompassing user-centered XAI method selection, interface co-design, and iterative evaluation and refinement. We demonstrate its application through a retrospective case study analysis of a published XAI-CDSS for sepsis care. Our synthesis highlights the importance of aligning XAI with clinical workflows, supporting calibrated trust, and deploying robust evaluation methodologies that capture real-world clinician–AI interaction patterns, such as negotiation. The case analysis shows how the framework can systematically identify and address user-centric gaps, leading to better workflow integration, tailored explanations, and more usable interfaces. We conclude that achieving trustworthy and clinically useful XAI-CDSS requires a fundamentally user-centered approach; our framework offers actionable guidance for creating explainable, usable, and trusted AI systems in healthcare.
- Research Article
390
- 10.3390/app11115088
- May 31, 2021
- Applied Sciences
Machine Learning, and Artificial Intelligence (AI) more broadly, have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI systems has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output, which can then be interpreted within a given context. One area in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making and, in the absence of explainability, may lead to under- or over-reliance. Providing explanations for how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases life-saving, decisions. The need for XAI in CDSSs, and in the medical field in general, is amplified by the need for ethical and fair decision-making and by the fact that AI trained on historical data can reinforce historical actions and biases that should instead be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSSs. XAI-enabled systems that process tabular data are the most common, while XAI-enabled CDSSs for text analysis are the least common in the literature. Developers show more interest in providing local explanations, while post-hoc and ante-hoc explanations, as well as model-specific and model-agnostic techniques, are almost evenly balanced. Studies reported benefits of XAI such as enhancing clinicians' decision confidence and generating hypotheses about causality, which ultimately lead to increased trustworthiness and acceptability of the system and greater potential for its incorporation into the clinical workflow.
However, we found a distinct overall lack of applications of XAI in the context of CDSSs and, in particular, a lack of user studies exploring clinicians' needs. We propose guidelines for the implementation of XAI in CDSSs and explore opportunities, challenges, and future research needs.
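The review's distinction between local versus global and post-hoc versus ante-hoc explanations can be made concrete with a small sketch of a local, post-hoc, model-agnostic explanation: perturb one feature at a time and report how the model's score moves. The scoring function, feature names, and zero-value baseline below are all invented for illustration; established methods such as LIME or SHAP use more principled perturbation and weighting schemes.

```python
def risk_model(patient):
    """Toy stand-in for an opaque CDSS scoring function (illustrative weights)."""
    return (0.04 * patient["heart_rate"]
            + 0.3 * patient["lactate"]
            - 0.02 * patient["systolic_bp"])

def local_explanation(model, patient):
    """Attribute the score to each feature by zeroing it and measuring the drop."""
    base = model(patient)
    contributions = {}
    for feature in patient:
        perturbed = dict(patient)
        perturbed[feature] = 0.0          # crude baseline: feature "removed"
        contributions[feature] = base - model(perturbed)
    return base, contributions

patient = {"heart_rate": 110.0, "lactate": 4.0, "systolic_bp": 90.0}
score, contrib = local_explanation(risk_model, patient)
for name, delta in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:12s} {delta:+.2f}")
```

Because the explanation only queries the model as a black box, the same loop works unchanged for any scoring function, which is what "model-agnostic" buys in practice.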
- Research Article
- 10.2196/64266
- Mar 26, 2025
- JMIR Formative Research
Artificial intelligence (AI)-based systems in medicine, such as clinical decision support systems (CDSSs), have shown promising results in health care, sometimes outperforming human specialists. However, the integration of AI may challenge medical professionals' identities and lead to limited trust in the technology, resulting in health care professionals rejecting AI-based systems. This study aims to explore the impact of AI process design features on physicians' trust in the AI solution and on perceived threats to their professional identity. These design features involve the explainability of AI-based CDSS decision outcomes, the integration depth of the AI-generated advice into the clinical workflow, and the physician's accountability for AI system-induced medical decisions. We conducted a 3-factorial, web-based, between-subject, scenario-based experiment with 292 medical students in training and experienced physicians across different specialties. The participants were presented with an AI-based CDSS for sepsis prediction and prevention for use in a hospital. Each participant was given a scenario in which the 3 design features of the AI-based CDSS were manipulated in a 2×2×2 factorial design. The SPSS PROCESS macro (IBM Corp) was used for hypothesis testing. The results suggest that the explainability of the AI-based CDSS was positively associated with both trust in the AI system (β=.508; P<.001) and professional identity threat perceptions (β=.351; P=.02). Trust in the AI system was negatively related to professional identity threat perceptions (β=-.138; P=.047), indicating an effect on professional identity threat partially mediated through trust. Deep integration of AI-generated advice into the clinical workflow was positively associated with trust in the system (β=.262; P=.009).
Accountability for AI-based decisions (that is, the system requiring a signature) was positively associated with professional identity threat perceptions among the respondents (β=.339; P=.004). Our research highlights the role of the process design features of AI systems used in medicine in shaping professional identity perceptions, mediated through increased trust in AI. An explainable AI-based CDSS and AI-generated advice that is deeply integrated into the clinical workflow both reinforce trust, thereby mitigating perceived professional identity threats. However, explainability and individual accountability for the system also directly exacerbate threat perceptions. Our findings illustrate the complex behavioral dynamics of AI in health care and have broader implications for supporting the implementation of AI-based CDSSs in contexts where AI systems may impact professional identity.
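The partial mediation reported above (explainability raises trust, and trust in turn lowers identity threat) follows the standard product-of-paths logic: estimate path a (treatment to mediator), then paths b and c' from a joint regression of the outcome on treatment and mediator, and take a·b as the indirect effect. The sketch below illustrates that logic on synthetic data with invented effect sizes and hand-rolled ordinary least squares; it is not the SPSS PROCESS macro the study actually used, and the coefficients bear no relation to the reported βs.

```python
import random

random.seed(7)
n = 500
# Hypothetical data: explainability condition X (0/1) -> trust M -> identity threat Y
X = [float(random.getrandbits(1)) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]                  # trust raised by explainability
Y = [0.35 * x - 0.15 * m + random.gauss(0, 1) for x, m in zip(X, M)]

def center(v):
    mu = sum(v) / len(v)
    return [u - mu for u in v]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

Xc, Mc, Yc = center(X), center(M), center(Y)

# Path a: simple OLS of trust on explainability (centred data, so no intercept)
a = dot(Xc, Mc) / dot(Xc, Xc)

# Paths c' (direct) and b: joint regression of threat on explainability and trust,
# solving the 2x2 normal equations by Cramer's rule
sxx, sxm, smm = dot(Xc, Xc), dot(Xc, Mc), dot(Mc, Mc)
sxy, smy = dot(Xc, Yc), dot(Mc, Yc)
det = sxx * smm - sxm * sxm
c_prime = (sxy * smm - smy * sxm) / det
b = (smy * sxx - sxy * sxm) / det

print(f"a={a:.3f}  b={b:.3f}  c'={c_prime:.3f}  indirect={a * b:.3f}")
```

With these invented parameters the direct path c' stays positive while the indirect path a·b is negative, reproducing the qualitative pattern of a mediator that partially offsets a direct effect.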
- Research Article
- 10.12788/fp.0589
- May 1, 2025
- Federal Practitioner: for the health care professionals of the VA, DoD, and PHS
Limited staff, rising costs, and regulatory oversight, coupled with the need to achieve clinical endpoints and improve access to care, have made scaling health care operations challenging. This article explores the emerging paradigm of multiagent artificial intelligence (AI) systems in health care, which represent a significant leap beyond traditional large language models. This analysis reviews the potential of multiagent AI systems to revolutionize patient care, streamline administrative processes, and support complex clinical decision-making. It describes a hypothetical sepsis management system comprising 7 specialized AI agents, with each agent handling specific aspects of patient care, from data collection and diagnosis to treatment recommendations and resource management. Additional applications in chronic disease management and hospital patient flow optimization are also examined. The technical implementation of these systems is discussed, including the use of advanced large language models, interagent quality control measures, guardrail implementation, self-reflection mechanisms, integration with electronic health records, and the importance of explainable AI in ensuring decision transparency. Potential benefits include enhanced diagnostic accuracy and personalized treatment plans. Challenges remain related to data quality assurance, workflow integration, and ethical considerations. Future directions for AI include the integration of internet-enabled devices and the development of more sophisticated natural language interfaces. This article underscores the transformative potential of multiagent AI systems in health care while emphasizing the importance of rigorous validation, ethical oversight, and a patient-centered approach in their development and implementation.
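The multiagent pattern the article describes — specialized agents each handling one aspect of care, coordinated by an orchestrator — can be sketched at a very small scale. The three roles, field names, and thresholds below are invented stand-ins for three of the seven hypothetical agents; a real system would wrap language-model calls, guardrails, and EHR integration around each step rather than plain functions.

```python
def data_agent(record):
    """Data-collection role: check that the vitals the pipeline needs are present."""
    record["complete"] = all(k in record for k in ("temp_c", "heart_rate", "lactate"))
    return record

def diagnosis_agent(record):
    """Diagnosis role: a deliberately simplistic, illustrative sepsis screen."""
    if not record["complete"]:
        record["assessment"] = "insufficient data"
    elif record["lactate"] > 2.0 and record["heart_rate"] > 100:
        record["assessment"] = "possible sepsis"
    else:
        record["assessment"] = "low concern"
    return record

def treatment_agent(record):
    """Treatment-recommendation role: act on the diagnosis agent's output."""
    if record["assessment"] == "possible sepsis":
        record["recommendation"] = "escalate: sepsis bundle review"
    else:
        record["recommendation"] = "continue routine monitoring"
    return record

def orchestrate(record, agents=(data_agent, diagnosis_agent, treatment_agent)):
    for agent in agents:          # each agent enriches the shared patient record
        record = agent(record)
    return record

result = orchestrate({"temp_c": 38.9, "heart_rate": 118, "lactate": 3.1})
print(result["assessment"], "->", result["recommendation"])
```

Keeping the shared record explicit is one way to support the decision transparency the article emphasizes: every agent's contribution is inspectable after the run.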
- Research Article
- 10.63682/jns.v14i31s.8739
- May 31, 2025
- Journal of Neonatal Surgery
Background: Artificial intelligence (AI) is gradually transforming neonatal and pediatric intensive care units (NICUs and PICUs) by enhancing diagnostic accuracy, risk evaluation, and clinical decision support. However, integrating AI into these critical care settings faces challenges related to data limitations, clinician acceptance, and socioeconomic disparities. Objective: This review examines the clinical potential of AI, especially machine learning (ML) and deep learning (DL), in NICUs and PICUs, while evaluating the socioeconomic factors that influence AI deployment, effectiveness, and equity. Methods: A comprehensive literature review was conducted, focusing on applications of AI in early diagnosis, patient surveillance, imaging assessment, and transport logistics in neonatal and pediatric ICUs. Socioeconomic factors affecting AI deployment, such as provider demographics, health care systems, and geographic inequalities, were examined. Findings: AI models improve early identification of urgent conditions such as sepsis and respiratory distress, streamline clinical processes, and improve resource management. Nevertheless, differences in access to AI and in its performance persist, especially in low-resource environments, because of inadequate infrastructure, biased data, and differing levels of clinician preparedness. Approaches such as federated learning and explainable AI could address some of these challenges.
- Discussion
6
- 10.1016/j.ejmp.2021.05.008
- Mar 1, 2021
- Physica Medica
Focus issue: Artificial intelligence in medical physics.
- Research Article
- 10.1080/10447318.2025.2539458
- Aug 7, 2025
- International Journal of Human–Computer Interaction
Artificial Intelligence (AI) has demonstrated potential in healthcare, particularly in enhancing diagnostic accuracy and decision-making through Clinical Decision Support Systems (CDSSs). However, the successful implementation of these systems relies on user trust and reliance, which can be influenced by explainable AI. This study explores the impact of varying explainability levels on clinicians' trust, cognitive load, and diagnostic performance in breast cancer detection. Utilizing an interrupted time series design, we conducted a web-based experiment involving 28 healthcare professionals. The results revealed that high confidence scores substantially increased trust but also led to overreliance, reducing diagnostic accuracy. In contrast, low confidence scores decreased trust and agreement while increasing diagnosis duration, reflecting more cautious behavior. Some explainability features influenced cognitive load by increasing stress levels. Additionally, demographic factors such as age, gender, and professional role shaped participants' perceptions of and interactions with the system. This study provides valuable insights into how explainability impacts clinicians' behavior and decision-making. The findings highlight the importance of designing AI-driven CDSSs that balance transparency, usability, and cognitive demands to foster trust and improve integration into clinical workflows.
- Research Article
3
- 10.1016/j.techsoc.2024.102736
- Oct 16, 2024
- Technology in Society
Technology readiness assessment: Case of clinical decision support systems in healthcare
- Book Chapter
3
- 10.1016/b978-0-443-19096-4.00006-7
- Aug 25, 2023
- Emotional AI and Human-AI Interactions in Social Networking
Chapter Twelve - Human AI: Explainable and responsible models in computer vision
- Supplementary Content
- 10.2196/63733
- Jun 20, 2025
- Journal of Medical Internet Research
Background: Clinical decision support systems (CDSSs) have the potential to play a crucial role in enhancing health care quality by providing evidence-based information to clinicians at the point of care. Despite their increasing popularity, there is a lack of comprehensive research exploring their design characteristics and trends. This limits our understanding and ability to optimize their functionality, usability, and adoption in health care settings. Objective: This systematic review examined the design characteristics of CDSSs from a user-centered perspective, focusing on user-centered design (UCD), user experience (UX), and usability, to identify related design challenges and provide insights into the implications for the future design of CDSSs. Methods: This review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) recommendations and used a grounded theory analytical approach to guide the conduct, data analysis, and synthesis. A search of 4 major electronic databases (PubMed, Web of Science, Scopus, and IEEE Xplore) was conducted for papers published between 2013 and 2023, using predefined design-focused keywords (design, UX, implementation, evaluation, usability, and architecture). Papers were included if they focused on a designed CDSS for a health condition and discussed design and UX aspects (eg, design approach, architecture, or integration). Papers were excluded if they solely covered technical implementation or architecture (eg, machine learning methods) or were editorials, reviews, books, conference abstracts, or study protocols. Results: Of 1905 initially identified papers, 40 passed screening and eligibility checks for full review and analysis. Analysis of the studies revealed that UCD is the most widely adopted approach for designing CDSSs, with all design processes incorporating functional or usability evaluation mechanisms.
The CDSSs reported were mainly clinician-facing and mostly stand-alone systems, with their design lacking consideration for integration with existing clinical information systems and workflows. Through a UCD lens, four key categories of challenges relevant to CDSS design were identified: (1) usability and UX, (2) validity and reliability, (3) data quality and assurance, and (4) design and integration complexities. Notably, a subset of studies incorporating explainable artificial intelligence (XAI) highlighted its emerging role in addressing key challenges related to validity and reliability by fostering explainability, transparency, and trust in CDSS recommendations, while also supporting collaborative validation with users. Conclusions: While CDSSs show promise in enhancing health care delivery, the identified challenges have implications for their future design, efficacy, and utilization. Adopting pragmatic UCD approaches that actively involve users is essential for enhancing usability and addressing the identified UX challenges. Integration with clinical systems is crucial for interoperability and presents opportunities for AI-enabled CDSSs that rely on large volumes of patient data. Incorporating emerging technologies such as XAI can boost trust and acceptance. Enabling CDSS functionality that supports both clinicians and patients can create opportunities for effective use in virtual care.
- Research Article
- 10.18231/j.aprd.2024.059
- Dec 15, 2024
- IP Annals of Prosthodontics and Restorative Dentistry
With the evolution of Artificial Intelligence (AI), even cancer care approaches are evolving, as AI provides innovative solutions to some of the most complex challenges in oncology. This article delves into how AI is making a profound impact across the cancer care spectrum worldwide, from early detection and precise diagnosis to the personalization of treatment and improved patient management. By harnessing AI's ability to analyze massive datasets and identify patterns beyond human perception, healthcare professionals can offer more accurate diagnoses and more effective treatments tailored to individual patient needs. This review also highlights the most recent advancements in AI-driven technologies in oncology and looks toward a future where AI's role is expected to expand further. By discussing the potential and challenges of AI in cancer care, this article offers insights into how it is reshaping oncology practice, with the ultimate goal of enhancing patient outcomes and revolutionizing cancer treatment. The article explores the transformative role of AI in oncology, focusing on its impact on early cancer detection, precise diagnosis, personalized treatment, and overall patient management, and seeks to provide insights into recent advancements of AI in cancer care, the challenges associated with its integration, and potential future directions in oncology. A comprehensive review of the literature was conducted, focusing on AI applications in oncology, including diagnostic imaging, precision oncology, and clinical decision support systems. Recent studies were analyzed to understand the role of AI-driven technologies in cancer diagnosis, treatment, and management.
Inclusion criteria: Peer-reviewed articles, case studies, and reviews published in the last five years that focus on the application of AI in oncology, including early cancer detection, diagnostic accuracy, personalized treatment, and clinical decision support systems. Exclusion criteria: Articles that did not focus on oncology, did not involve AI technologies, or were not peer-reviewed were excluded from this review. AI has shown significant improvements in cancer detection and diagnostic accuracy, particularly through advanced imaging techniques and personalized treatment strategies. AI-powered diagnostic tools have revolutionized imaging by enhancing detection rates and reducing diagnostic errors. Moreover, AI has played a crucial role in tailoring therapeutic interventions based on individual patient characteristics, thus contributing to precision oncology. AI is revolutionizing cancer care by improving diagnostic precision, personalizing treatments, and enhancing patient outcomes. However, challenges such as data privacy, algorithm bias, and regulatory complexities must be addressed. Future innovations in AI, along with collaborative efforts, will further enhance cancer care and pave the way for AI-driven oncology practices globally.
- Research Article
85
- 10.1002/mp.15359
- Dec 7, 2021
- Medical physics
The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to utilize explainability techniques in an attempt to help users understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. AI application to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease and how they can restore trust in AI applications to this disease. This includes the identification of common tasks relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and will assist in the selection of an approach that is both appropriate and effective for a given scenario.
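One widely used family of approaches for producing explainable output from imaging models is perturbation-based occlusion sensitivity: mask each image region in turn and record how much the classifier score drops, yielding a saliency map. The sketch below demonstrates the idea on a toy 4×4 "image" with an invented linear scorer; a real system would occlude patches of a CT or X-ray and query a trained network instead.

```python
def classifier_score(image):
    """Toy scorer standing in for a trained model: responds to the upper-left region."""
    weights = [[3, 3, 0, 0],
               [3, 3, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]]
    return sum(w * p for wrow, prow in zip(weights, image) for w, p in zip(wrow, prow))

def occlusion_map(image):
    """Saliency[i][j] = score drop when pixel (i, j) is masked to zero."""
    base = classifier_score(image)
    saliency = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            occluded = [row[:] for row in image]
            occluded[i][j] = 0.0
            saliency[i][j] = base - classifier_score(occluded)
    return saliency

image = [[1.0] * 4 for _ in range(4)]
saliency = occlusion_map(image)
for row in saliency:
    print(["%.1f" % v for v in row])
```

The map correctly highlights the upper-left region the toy scorer depends on, which is exactly the sanity check such methods offer clinicians: does the model attend to anatomically plausible regions?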
- Research Article
17
- 10.1016/j.ogla.2020.08.006
- Aug 15, 2020
- Ophthalmology. Glaucoma
Special Commentary: Using Clinical Decision Support Systems to Bring Predictive Models to the Glaucoma Clinic
- Research Article
180
- 10.1016/j.ijmedinf.2012.02.009
- Mar 27, 2012
- International Journal of Medical Informatics
Integrating usability testing and think-aloud protocol analysis with “near-live” clinical simulations in evaluating clinical decision support
- Book Chapter
4
- 10.1201/9781003097204-2
- Jul 7, 2021
Artificial intelligence (AI) is playing a significant role in revolutionizing the health-care industry. With the rapid increase in the availability of new clinical data sources and the evolution of new AI-based technologies, its clinical applications are growing significantly. AI in medicine and the medical domain helps administrators, practitioners, patients and other stakeholders by emulating intelligent human behavior in computers and in other specialized machines used in many promising medical care applications. The availability of large standard datasets is crucial for training these AI-based systems. A variety of medical data sources specifically used in the diagnosis process exist, such as (1) numeric and textual data – which includes patient attributes, i.e. gender, age, clinical history, disease symptoms and physical examination results, mostly used for risk prediction of a particular disease, and textual reports, i.e. physical examination outcomes, operative notes, laboratory reports and discharge summaries; (2) image data – which encompasses screened images obtained from different modalities such as radiology images (i.e. X-rays, computed tomography (CT) scans, magnetic resonance imaging (MRI)), dermoscopy images, fundus and eye-screening images, pathology images and many more; (3) sound data – such as ultrasound (US) and heart sound signals, used for early diagnosis of disease and analysis of internal human body parts; (4) genetic data – used for the diagnosis of several complex diseases such as cancers, Down's syndrome and infectious diseases. The successful application of AI in healthcare has attracted many small and large companies to invest in this domain, including Amazon, Microsoft, Google and Nvidia. Broad categories of AI methods are being employed by researchers to provide better solutions in the medical domain.
These methods mainly include (1) knowledge-based expert systems (ES), which are primarily based on a series of if-then rules defined by domain experts and are specifically utilized in clinical decision support systems; (2) machine learning (ML), in which statistical models are trained on different clinical datasets to perform tasks like disease detection, treatment planning and disease prediction; and (3) deep learning (DL), a state-of-the-art automatic learning mechanism based on the multilayer perceptron and its enhanced forms, which can assist in risk prediction, prognosis, detection and diagnosis of different diseases by recognizing patterns in training data. There are numerous application areas where AI methods can improve the performance of systems and clinical results, including (1) computer vision, which assists in the detection and diagnosis of disease and in the monitoring of patients using medical images like X-rays, MRIs and US; (2) natural language processing (NLP), which assists in the creation and interpretation of patients' medical reports using structured and unstructured medical and contextual data; (3) wearable devices, which help in patient monitoring (i.e. observing a patient's health condition) by recording different biomedical signals (i.e. blood pressure, heartbeat); and (4) virtual assistants, which are autonomous entities that formulate their decisions based on their interactions with their environment and their self-learning mechanisms. These are systems that can operate in the absence of human intelligence and may range from simple systems like a thermostat to complex networked systems like an army of robots. In addition, AI assists in the production of new medicines, predicts drug-drug interactions, helps patients through virtual assistants or chatbots, supports doctors in patient monitoring using web-based or cloud-based systems, and contributes to administrative tasks.
Among all of these wonderful applications of AI, automated disease diagnosis is the major and most successful domain. Providing the right, optimal diagnosis at the right time is an important aspect of the medical care industry. Normally, radiologists and other clinicians perform manual scrutiny of screened images to find different abnormalities, which requires a large amount of time and may be prone to subjectivity. Moreover, the chance of error is further increased by a lack of expert experience, visual fatigue, physical health issues of professionals, etc. One potential solution to this problem is the intervention of computerized AI-based systems in healthcare to reduce these diagnostic errors. This chapter presents an extensive study of some AI technologies and their applications in the detection, diagnosis, treatment, prediction and prescription of different human diseases. Our main goal here is to draw a bigger picture and establish the context for the remainder of this book. The chapter is structured as follows: we first explore different types of medical data and their use in AI-based healthcare. After that, we introduce some AI technologies and their applications in medical care systems. We then examine how these AI technologies and medical data are used by AI systems for the detection, prediction and diagnosis of different types of disease. At the end of the chapter, we cover the benefits and challenges of using these AI systems in medical care.
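The first method family the chapter describes — knowledge-based expert systems built from explicit if-then rules, the classical architecture for clinical decision support — can be sketched in a few lines. The rules, thresholds, and advice strings below are invented for illustration and are not clinical guidance; real systems encode validated guidelines and handle far more context.

```python
# Each rule: (human-readable name, condition over the patient record, advice).
# Keeping rules as data makes the system's reasoning directly inspectable,
# which is the inherent explainability advantage of this method family.
RULES = [
    ("fever with tachycardia",
     lambda p: p["temp_c"] >= 38.3 and p["heart_rate"] > 90,
     "flag possible infection"),
    ("low blood pressure",
     lambda p: p["systolic_bp"] < 90,
     "flag possible hypotension"),
    ("elevated lactate",
     lambda p: p["lactate"] > 2.0,
     "flag possible hypoperfusion"),
]

def evaluate(patient):
    """Fire every matching rule and return (rule name, advice) pairs."""
    return [(name, advice) for name, fires, advice in RULES if fires(patient)]

patient = {"temp_c": 39.1, "heart_rate": 112, "systolic_bp": 85, "lactate": 2.4}
fired = evaluate(patient)
for name, advice in fired:
    print(f"{name}: {advice}")
```

Because every recommendation is traceable to a named rule, this style is ante-hoc explainable by construction, in contrast to the post-hoc techniques needed for ML and DL models.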
- Research Article
41
- 10.1016/j.fertnstert.2020.10.040
- Nov 1, 2020
- Fertility and Sterility
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?