Explainable AI in healthcare: to explain, to predict, or to describe?

Abstract

Explainable Artificial Intelligence (AI) methods are designed to provide information about how AI-based models make predictions. In healthcare, there is a widespread expectation that these methods will provide relevant and accurate information about a model's inner workings to different stakeholders, ranging from patients and healthcare providers to AI and medical guideline developers. This is a challenging endeavor, since what qualifies as relevant information may differ greatly depending on the stakeholder. For many stakeholders, relevant explanations are causal in nature, yet explainable AI methods are often unable to deliver this information. Using the Describe-Predict-Explain framework, we argue that explainable AI methods are good descriptive tools: they may help to describe how a model works, but they are limited in their ability to explain why a model works in terms of true underlying biological mechanisms and cause-and-effect relations. This limits their suitability for providing actionable advice to patients or for judging the face validity of AI-based models.
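
The describe/explain distinction can be made concrete with a small sketch (simulated data; numpy and scikit-learn assumed; the smoking/yellow-fingers variables are hypothetical illustrations, not from the article): a model trained on a proxy feature that is correlated with, but not a cause of, the outcome will receive a faithful attribution for that proxy. The attribution correctly describes what the model relies on, yet it does not explain the disease mechanism.

    # Sketch: attribution describes the fitted model, not the causal mechanism.
    # Hypothetical setup: disease is caused by (unrecorded) smoking; the model
    # only sees a correlated proxy. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    smoking = rng.binomial(1, 0.3, n)                                     # true cause, not recorded
    yellow_fingers = np.where(rng.random(n) < 0.9, smoking, 1 - smoking)  # non-causal proxy
    age = rng.integers(20, 80, n)                                         # irrelevant filler feature
    disease = rng.binomial(1, 0.05 + 0.4 * smoking)

    X = np.column_stack([yellow_fingers, age])
    model = LogisticRegression().fit(X, disease)

    imp = permutation_importance(model, X, disease, scoring="roc_auc",
                                 n_repeats=20, random_state=0)
    for name, score in zip(["yellow_fingers", "age"], imp.importances_mean):
        print(f"{name}: AUC drop when permuted = {score:.3f}")
    # The large score for yellow_fingers faithfully *describes* the model, but it
    # is no causal *explanation*: cleaning a patient's fingers would change the
    # prediction without changing their risk.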

Similar Papers
  • Research Article
  • Citations: 8
  • 10.1001/jamanetworkopen.2025.14452
Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients
  • Jun 10, 2025
  • JAMA Network Open
  • Lena Hoffmann + 99 more

Importance: The successful implementation of artificial intelligence (AI) in health care depends on its acceptance by key stakeholders, particularly patients, who are the primary beneficiaries of AI-driven outcomes. Objective: To survey hospital patients to investigate their trust, concerns, and preferences toward the use of AI in health care and diagnostics and to assess the sociodemographic factors associated with patient attitudes. Design, Setting, and Participants: This cross-sectional study developed and implemented an anonymous quantitative survey between February 1 and November 1, 2023, using a nonprobability sample at 74 hospitals in 43 countries. Participants included hospital patients 18 years of age or older who agreed to voluntary participation in the survey, presented in 1 of 26 languages. Interventions: Information sheets and paper surveys handed out by hospital staff and posted in conspicuous hospital locations. Main Outcomes and Measures: The primary outcome was participant responses to a 26-item instrument containing a general data section (8 items) and 3 dimensions (trust in AI, AI and diagnosis, preferences and concerns toward AI) with 6 items each. Subgroup analyses used cumulative link mixed and binary mixed-effects models. Results: In total, 13 806 patients participated, including 8951 (64.8%) in the Global North and 4855 (35.2%) in the Global South. Their median (IQR) age was 48 (34-62) years, and 6973 (50.5%) were male. The survey results indicated a predominantly favorable general view of AI in health care, with 57.6% of respondents (7775 of 13 502) expressing a positive attitude. However, attitudes varied notably with demographic characteristics, health status, and technological literacy. Female respondents (3511 of 6318 [55.6%]) exhibited fewer positive attitudes toward AI use in medicine than male respondents (4057 of 6864 [59.1%]), and participants with poorer health status exhibited fewer positive attitudes (eg, 58 of 199 [29.2%] with rather negative views) than patients with very good health (eg, 134 of 2538 [5.3%] with rather negative views). Conversely, higher levels of AI knowledge and frequent use of technology devices were associated with more positive attitudes. Notably, fewer than half of the participants expressed positive attitudes on all items pertaining to trust in AI. The lowest level of trust was observed for the accuracy of AI in providing information regarding treatment responses (5637 of 13 480 respondents [41.8%] trusted AI). Patients preferred explainable AI (8816 of 12 563 [70.2%]) and physician-led decision-making (9222 of 12 652 [72.9%]), even if it meant slightly compromised accuracy. Conclusions and Relevance: In this cross-sectional study of patient attitudes toward AI use in health care across 6 continents, findings indicated that tailored AI implementation strategies should take patient demographics, health status, and preferences for explainable AI and physician oversight into account.
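
The subgroup analyses rely on cumulative link (ordinal) models, since Likert-style attitude items are ordered rather than interval-scaled. A minimal sketch of that model family, assuming statsmodels and simulated data; the hospital-level random effects the study includes are omitted here for brevity, and the covariates and effect sizes are invented:

    # Sketch of a cumulative link ("proportional odds") model on simulated
    # Likert-style attitudes. Requires numpy, pandas, statsmodels >= 0.12.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "male": rng.binomial(1, 0.5, n),
        "good_health": rng.binomial(1, 0.6, n),
    })
    # Latent attitude with invented effect sizes, cut into a 4-point ordinal scale
    latent = 0.3 * df["male"] + 0.8 * df["good_health"] + rng.logistic(size=n)
    df["attitude"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=False)

    model = OrderedModel(df["attitude"], df[["male", "good_health"]], distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())  # coefficients: cumulative log-odds of a more positive attitude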

  • Research Article
  • Citations: 1
  • 10.59022/ujldp.63
Legal Application of Artificial Intelligence in Healthcare
  • Feb 28, 2023
  • Uzbek Journal of Law and Digital Policy
  • Ekaterina Kan

The integration of artificial intelligence (AI) in healthcare has the potential to revolutionize the industry by improving patient outcomes and increasing efficiency. However, the rapid development and implementation of AI technologies raise complex legal issues and challenges. This article explores the key legal aspects of AI integration in healthcare, including data privacy and security, liability and accountability, intellectual property, and regulatory compliance. It examines relevant international and national legal instruments, regulations, and guidelines, as well as industry-specific standards that apply to AI in healthcare. The study also analyzes case studies and practical applications to highlight legal challenges and resolutions, lessons learned, and best practices. The discussion addresses the implications of the results, comparing the legal landscape for AI in healthcare to other industries and countries and highlighting potential future legal developments and challenges. The conclusion summarizes key findings, offers recommendations for integrating AI in healthcare systems while addressing legal concerns, and proposes future directions for legal research and policy development in the context of AI and healthcare. This comprehensive analysis aims to inform healthcare providers, AI developers, and policymakers on the legal landscape surrounding AI in healthcare, providing valuable insights to navigate this complex domain and harness the potential of AI to transform healthcare delivery.

  • Research Article
  • Citations: 5
  • 10.1089/bio.2023.29121.editorial
Readiness for Artificial Intelligence in Biobanking
  • Apr 1, 2023
  • Biopreservation and Biobanking
  • Gregory H Grossman + 1 more

  • Research Article
  • Citations: 234
  • 10.1016/s2589-7500(21)00132-1
Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review
  • Aug 23, 2021
  • The Lancet Digital Health
  • Albert T Young + 3 more

Artificial intelligence (AI) promises to change health care, with some studies showing proof of concept of a provider-level performance in various medical specialties. However, there are many barriers to implementing AI, including patient acceptance and understanding of AI. Patients' attitudes toward AI are not well understood. We systematically reviewed the literature on patient and general public attitudes toward clinical AI (either hypothetical or realised), including quantitative, qualitative, and mixed methods original research articles. We searched biomedical and computational databases from Jan 1, 2000, to Sept 28, 2020, and screened 2590 articles, 23 of which met our inclusion criteria. Studies were heterogeneous regarding the study population, study design, and the field and type of AI under study. Six (26%) studies assessed currently available or soon-to-be available AI tools, whereas 17 (74%) assessed hypothetical or broadly defined AI. The quality of the methods of these studies was mixed, with a frequent issue of selection bias. Overall, patients and the general public conveyed positive attitudes toward AI but had many reservations and preferred human supervision. We summarise our findings in six themes: AI concept, AI acceptability, AI relationship with humans, AI development and implementation, AI strengths and benefits, and AI weaknesses and risks. We suggest guidance for future studies, with the goal of supporting the safe, equitable, and patient-centred implementation of clinical AI.

  • Front Matter
  • Citations: 4
  • 10.1016/s2589-7500(22)00068-1
Holding artificial intelligence to account
  • Apr 5, 2022
  • The Lancet Digital Health
  • The Lancet Digital Health

In this issue of The Lancet Digital Health, Xiaoxuan Liu and colleagues give their perspective on global auditing of medical artificial intelligence (AI). They call for the focus to shift from demonstrating the strengths of AI in health care to proactively discovering its weaknesses. Machines make unpredictable mistakes in medicine, which differ significantly from those made by humans. Liu and colleagues state that errors made by AI tools can have far-reaching consequences because of the complex and opaque relationships between the analysis and the clinical output. Given that there is little human control over how an AI generates results and that clinical knowledge is not a prerequisite in AI development, there is a risk of an AI learning spurious correlations that seem valid during training but are unreliable when applied to real-world situations.

Lauren Oakden-Rayner and colleagues analysed the performance of an AI across a range of relevant features for hip fracture detection. This preclinical algorithmic audit identified barriers to clinical use, including a decrease in sensitivity at the prespecified operating point. The study highlighted several “failure modes”, that is, ways in which an AI recurrently fails under certain conditions. Oakden-Rayner told The Lancet Digital Health that their study showed that "the failure modes of AI systems can look bizarre from a human perspective. Take, for example, in the hip fracture audit (figure 5), the recognition that the AI missed an extremely displaced fracture … the sort of image even a lay person would recognise as completely abnormal." These errors can drastically affect clinician and patient trust in AI.

Another example demonstrating the need for auditing was highlighted last month in an investigation by STAT and the Massachusetts Institute of Technology, which found that an EPIC health algorithm used to predict sepsis risk in the USA deteriorated sharply in performance, from an AUC of 0.73 to 0.53, over 10 years. This deterioration was caused by changes in the hospital coding system, increased diversity and volume of patient data, and changes in the operational behaviours of caregivers. There was little to no oversight of the AI tool once it hit the market, potentially causing harm to patients in hospital. Liu commented, "without the ability to observe and learn from algorithmic errors, the risk is that it will continue to happen and there's no accountability for any harm that results."

Auditing medical AI is essential; but whose responsibility is it to ensure that AI is safe to use? Some experts think that AI developers are responsible for providing guidance on managing their tools, including how and when to check the system's performance, and for identifying vulnerabilities that might emerge after the tools are put into practice. Others argue that not all the responsibility lies with AI developers, and that health providers must test AI models on other data to verify their utility and assess potential vulnerabilities. Liu says, "we need clinical teams to start playing an active role in algorithmic safety oversight. They are best placed to define what success and failure looks like for their health institution and their patient cohort."

There are three challenges to overcome to ensure AI auditing is successfully implemented. First, auditing will require professionals with clinical and technical expertise to investigate and prevent AI errors and to thoughtfully interrogate errors before and during real-world deployment. However, experts with computational and clinical skill sets are not yet commonplace; health-care institutes, AI companies, and governments must invest in upskilling health-care workers so that these experts can become an integral part of the medical AI development process. Second, industry-wide standards for monitoring medical AI tools over time must be enforced by key regulatory bodies. Researchers are developing tools to identify when an algorithm becomes miscalibrated because of changes in data or environment, but these tools must be endorsed in a sustained and standardised way, led by regulators, health systems, and AI developers. Third, the main issue that can exacerbate errors in AI is the lack of transparency of data, code, and parameters due to intellectual property concerns. Liu and colleagues emphasise that much of the benefit that software and data access would provide can instead be obtained through a web portal with the ability to test the model on new data and receive model outputs. Oakden-Rayner said, "AI developers have a responsibility to make auditing easier for clinicians, especially by providing clear details of how their system works and how it was built."

Linked Articles
  • The medical algorithmic audit. Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. We propose a medical algorithmic audit framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of errors, and anticipating their potential consequences.
  • Validation and algorithmic audit of a deep learning system for the detection of proximal femoral fractures in patients in the emergency department: a diagnostic accuracy study. The model outperformed the radiologists tested and maintained performance on external validation, but showed several unexpected limitations during further testing. Thorough preclinical evaluation of artificial intelligence models, including algorithmic auditing, can reveal unexpected and potentially harmful behaviour even in high-performance artificial intelligence systems, which can inform future clinical testing and deployment decisions.
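
The sepsis example suggests a simple operational safeguard: recompute discrimination on each new batch of deployment data and flag sustained drops. A minimal sketch, assuming scikit-learn and simulated scores; the batch sizes, drift pattern, and alert margin are illustrative, not from the investigation:

    # Sketch: windowed post-deployment AUC monitoring with a degradation alarm.
    # Requires numpy and scikit-learn; scores are simulated to mimic gradual drift.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)

    def quarterly_batch(drift):
        """One quarter of predictions whose discrimination decays as drift grows."""
        y = rng.binomial(1, 0.1, 2000)
        scores = (1.0 - drift) * y + rng.normal(0, 1, y.size)
        return y, scores

    baseline_auc = 0.73   # AUC at validation time (cf. the sepsis example above)
    alert_margin = 0.05   # trigger a re-audit if AUC falls this far below baseline

    for quarter in range(12):
        y_true, y_score = quarterly_batch(drift=quarter / 12)
        auc = roc_auc_score(y_true, y_score)
        flag = "  <-- re-audit" if auc < baseline_auc - alert_margin else ""
        print(f"quarter {quarter:2d}: AUC = {auc:.2f}{flag}")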

  • Research Article
  • Citations: 3
  • 10.2196/70179
Attitudes Toward AI Usage in Patient Health Care: Evidence From a Population Survey Vignette Experiment
  • May 27, 2025
  • Journal of Medical Internet Research
  • Simon Kühne + 3 more

Background: The integration of artificial intelligence (AI) holds substantial potential to alter diagnostics and treatment in health care settings. However, public attitudes toward AI, including trust and risk perception, are key to its ethical and effective adoption. Despite growing interest, empirical research on the factors shaping public support for AI in health care (particularly in large-scale, representative contexts) remains limited. Objective: This study aimed to investigate public attitudes toward AI in patient health care, focusing on how AI attributes (autonomy, costs, reliability, and transparency) shape perceptions of support, risk, and personalized care. In addition, it examines the moderating role of sociodemographic characteristics (gender, age, educational level, migration background, and subjective health status) in these evaluations. Our study offers novel insights into the relative importance of AI system characteristics for public attitudes and acceptance. Methods: We conducted a factorial vignette experiment with a probability-based survey of 3030 participants from Germany's general population. Respondents were presented with hypothetical scenarios involving AI applications in diagnosis and treatment in a hospital setting. Linear regression models assessed the relative influence of AI attributes on the dependent variables (support, risk perception, and personalized care), with additional subgroup analyses to explore heterogeneity by sociodemographic characteristics. Results: Mean values between 4.2 and 4.4 on a 1-7 scale indicate a generally neutral to slightly negative stance toward AI integration in terms of general support, risk perception, and personalized care expectations, with responses spanning the full scale from strong support to strong opposition. Among the 4 dimensions, reliability emerges as the most influential factor (percentage of explained variance [EV] of up to 10.5%). Respondents expect AI not only to prevent errors but to exceed current reliability standards, while strongly disapproving of nontraceable systems (transparency is another important factor, with a percentage of EV of up to 4%). Costs and autonomy play a comparatively minor role (percentages of EV of up to 1.5% and 1.3%, respectively), with preferences favoring collaborative AI systems over autonomous ones, and higher costs generally leading to rejection. Heterogeneity analysis reveals limited sociodemographic differences, with education and migration background influencing attitudes toward transparency and autonomy, and gender differences primarily affecting cost-related perceptions. Overall, attitudes do not substantially differ between AI applications in diagnosis versus treatment. Conclusions: Our study fills a critical research gap by identifying the key factors that shape public trust and acceptance of AI in health care, particularly reliability, transparency, and patient-centered approaches. Our findings provide evidence-based recommendations for policy makers, health care providers, and AI developers to enhance trust and accountability, key concerns often overlooked in system development and real-world applications. The study highlights the need for targeted policy and educational initiatives to support the responsible integration of AI in patient care.
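
The per-attribute "percentage of explained variance" figures can be read as the drop in R-squared when one vignette attribute is removed from the regression. A minimal sketch of that computation, assuming statsmodels and simulated vignette data; the effect sizes are invented, not the study's:

    # Sketch: explained-variance share of each vignette attribute, computed as
    # the R-squared drop when that attribute is removed from the linear model.
    # Requires numpy, pandas, statsmodels; data and effect sizes are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 3000
    attrs = ["reliability", "transparency", "costs", "autonomy"]
    df = pd.DataFrame({
        "reliability": rng.integers(0, 3, n),    # vignette levels, coded 0..2
        "transparency": rng.integers(0, 2, n),
        "costs": rng.integers(0, 2, n),
        "autonomy": rng.integers(0, 2, n),
    })
    df["support"] = (4.3 + 0.9 * df["reliability"] + 0.5 * df["transparency"]
                     - 0.2 * df["costs"] - 0.2 * df["autonomy"]
                     + rng.normal(0, 1.5, n))    # 1-7 support rating around neutral

    full = smf.ols("support ~ " + " + ".join(f"C({a})" for a in attrs), data=df).fit()
    for attr in attrs:
        rest = " + ".join(f"C({a})" for a in attrs if a != attr)
        reduced = smf.ols(f"support ~ {rest}", data=df).fit()
        share = 100 * (full.rsquared - reduced.rsquared)
        print(f"{attr:12s} explained variance ~ {share:.1f}%")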

  • Research Article
  • Citations: 18
  • 10.1186/s12909-024-06035-4
Global cross-sectional student survey on AI in medical, dental, and veterinary education and practice at 192 faculties
  • Sep 28, 2024
  • BMC Medical Education
  • Felix Busch + 99 more

Background: The successful integration of artificial intelligence (AI) in healthcare depends on the global perspectives of all stakeholders. This study aims to answer the research question: What are the attitudes of medical, dental, and veterinary students towards AI in education and practice, and what are the regional differences in these perceptions? Methods: An anonymous online survey was developed based on a literature review and expert panel discussions. The survey assessed students' AI knowledge, attitudes towards AI in healthcare, current state of AI education, and preferences for AI teaching. It consisted of 16 multiple-choice items, eight demographic queries, and one free-field comment section. Medical, dental, and veterinary students from various countries were invited to participate via faculty newsletters and courses. The survey measured technological literacy, AI knowledge, current state of AI education, preferences for AI teaching, and attitudes towards AI in healthcare using Likert scales. Data were analyzed using descriptive statistics, Mann–Whitney U-test, Kruskal–Wallis test, and Dunn-Bonferroni post hoc test. Results: The survey included 4313 medical, 205 dentistry, and 78 veterinary students from 192 faculties and 48 countries. Most participants were from Europe (51.1%), followed by North/South America (23.3%) and Asia (21.3%). Students reported positive attitudes towards AI in healthcare (median: 4, IQR: 3–4) and a desire for more AI teaching (median: 4, IQR: 4–5). However, they had limited AI knowledge (median: 2, IQR: 2–2), a lack of AI courses (76.3%), and felt unprepared to use AI in their careers (median: 2, IQR: 1–3). Subgroup analyses revealed significant differences between the Global North and South (r = 0.025 to 0.185, all P < .001) and across continents (r = 0.301 to 0.531, all P < .001), with generally small effect sizes. Conclusions: This large-scale international survey highlights medical, dental, and veterinary students' positive perceptions of AI in healthcare, their strong desire for AI education, and the current lack of AI teaching in medical curricula worldwide. The study identifies a need for integrating AI education into medical curricula, considering regional differences in perceptions and educational needs. Trial registration: Not applicable (no clinical trial).
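
For the analysis pipeline named in the methods, a minimal sketch on simulated Likert responses, assuming scipy. Note that scipy lacks the Dunn test itself, so the post hoc step is approximated here with Bonferroni-corrected pairwise Mann-Whitney tests (the scikit-posthocs package offers the exact Dunn-Bonferroni procedure); the group names and data are invented:

    # Sketch of the named nonparametric tests on simulated 5-point Likert data.
    # Requires numpy and scipy.
    from itertools import combinations

    import numpy as np
    from scipy.stats import kruskal, mannwhitneyu

    rng = np.random.default_rng(3)
    groups = {
        "Europe": rng.integers(1, 6, 400),     # 1..5 Likert responses
        "Americas": rng.integers(2, 6, 300),
        "Asia": rng.integers(1, 5, 300),
    }

    u, p = mannwhitneyu(groups["Europe"], groups["Asia"])   # two-group comparison
    print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")

    h, p = kruskal(*groups.values())                        # omnibus across groups
    print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.3g}")

    pairs = list(combinations(groups, 2))                   # post hoc follow-ups
    for a, b in pairs:
        _, p = mannwhitneyu(groups[a], groups[b])
        print(f"{a} vs {b}: Bonferroni-corrected p = {min(1.0, p * len(pairs)):.3g}")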

  • Research Article
  • Citations: 3
  • 10.1093/medlaw/fwaf005
The fifty shades of black: about black box AI and explainability in healthcare.
  • Jan 4, 2025
  • Medical law review
  • Vera Lúcia Raposo

Artificial Intelligence (AI) is revolutionizing healthcare by enhancing patient care, diagnostics, workflows, and treatment personalization. The integration of AI in healthcare promises significant advancements and better patient outcomes. However, the lack of explainability in many AI models, known as 'black-box AI', raises concerns for patients, doctors, and developers. This issue, termed 'black box medicine', challenges the adoption of AI in healthcare. The demand for explainable AI has grown as AI systems become more complex. The absence of explanations in AI decisions, especially in critical situations like healthcare, has sparked debates and even suggestions to exclude black-box AI from healthcare provision. This article examines the impact and causes of unexplainable AI in healthcare, critically evaluates its performance, and proposes strategies to address this challenge.

  • Research Article
  • Citations: 26
  • 10.3389/fgene.2022.902542
"Democratizing" artificial intelligence in medicine and healthcare: Mapping the uses of an elusive term.
  • Aug 15, 2022
  • Frontiers in genetics
  • Giovanni Rubeis + 2 more

Introduction: “Democratizing” artificial intelligence (AI) in medicine and healthcare is a vague term that encompasses various meanings, issues, and visions. This article maps the ways this term is used in discourses on AI in medicine and healthcare and uses this map for a normative reflection on how to direct AI in medicine and healthcare towards desirable futures. Methods: We searched peer-reviewed articles from Scopus, Google Scholar, and PubMed along with grey literature using search terms “democrat*”, “artificial intelligence” and “machine learning”. We approached both as documents and analyzed them qualitatively, asking: What is the object of democratization? What should be democratized, and why? Who is the demos who is said to benefit from democratization? And what kind of theories of democracy are (tacitly) tied to specific uses of the term? Results: We identified four clusters of visions of democratizing AI in healthcare and medicine: 1) democratizing medicine and healthcare through AI, 2) multiplying the producers and users of AI, 3) enabling access to and oversight of data, and 4) making AI an object of democratic governance. Discussion: The envisioned democratization in most visions mainly focuses on patients as consumers and relies on or limits itself to free market-solutions. Democratization in this context requires defining and envisioning a set of social goods, and deliberative processes and modes of participation to ensure that those affected by AI in healthcare have a say on its development and use.

  • Research Article
  • Citations: 11
  • 10.59022/ijlp.203
Challenges and Opportunities for AI in Healthcare
  • Jul 30, 2024
  • International Journal of Law and Policy
  • Kan Yekaterina

The integration of artificial intelligence (AI) in healthcare presents a dual challenge: maximizing the efficiency of medical processes while safeguarding patient privacy. This comprehensive review examines the delicate balance between leveraging AI's potential in healthcare and preserving individual data privacy. Through analysis of recent literature, case studies, and regulatory frameworks, we explore the current landscape of AI applications in healthcare, associated privacy risks, and emerging solutions. Findings reveal that while AI significantly enhances diagnostic accuracy and treatment planning, it also raises concerns about data security and patient confidentiality. Key challenges include ensuring GDPR and HIPAA compliance, managing large-scale health data, and maintaining transparency in AI decision-making processes. Promising approaches such as federated learning and differential privacy emerge as potential solutions. This review underscores the need for a multidisciplinary approach involving healthcare providers, AI developers, ethicists, and policymakers to create robust, privacy-preserving AI systems in healthcare.
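
The two privacy techniques the review singles out can be sketched in a few lines: federated averaging keeps patient records at each hospital and shares only model updates, and Gaussian noise added to clipped updates moves in the direction of differential privacy. A toy numpy sketch under those assumptions; all sizes and constants are illustrative, and this is not a calibrated DP mechanism:

    # Toy sketch: federated averaging with noisy, clipped updates. Each hospital
    # trains locally and shares only a model update; the added Gaussian noise
    # gestures at differential privacy but is not a calibrated DP mechanism.
    # Requires numpy only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sites, n_features = 5, 8
    true_w = rng.normal(size=n_features)

    # Each site holds its own records; the server never sees them.
    sites = []
    for _ in range(n_sites):
        X = rng.normal(size=(200, n_features))
        y = (X @ true_w + rng.normal(0, 0.5, 200) > 0).astype(float)
        sites.append((X, y))

    def local_update(w, X, y, lr=0.1, epochs=5):
        """A few steps of logistic-regression gradient descent on local data."""
        for _ in range(epochs):
            p = 1 / (1 + np.exp(-X @ w))
            w = w - lr * X.T @ (p - y) / len(y)
        return w

    w_global = np.zeros(n_features)
    for _ in range(10):
        updates = []
        for X, y in sites:
            delta = local_update(w_global, X, y) - w_global
            delta = delta / max(1.0, np.linalg.norm(delta))  # clip update norm to 1
            delta = delta + rng.normal(0, 0.1, n_features)   # privacy noise
            updates.append(delta)
        w_global = w_global + np.mean(updates, axis=0)       # federated averaging

    print("global weights after 10 rounds:", np.round(w_global, 2))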

  • Discussion
  • Citations: 6
  • 10.1016/j.ejmp.2021.05.008
Focus issue: Artificial intelligence in medical physics.
  • Mar 1, 2021
  • Physica Medica
  • F Zanca + 11 more

  • Research Article
  • Citations: 3
  • 10.3389/frai.2025.1577529
The imperative of diversity and equity for the adoption of responsible AI in healthcare.
  • Apr 16, 2025
  • Frontiers in artificial intelligence
  • Denise E Hilling + 5 more

Artificial Intelligence (AI) in healthcare holds transformative potential but faces critical challenges in ethical accountability and systemic inequities. Biases in AI models, such as lower diagnosis rates for Black women or gender stereotyping in Large Language Models, highlight the urgent need to address historical and structural inequalities in data and development processes. Disparities in clinical trials and datasets, often skewed toward high-income, English-speaking regions, amplify these issues. Moreover, the underrepresentation of marginalized groups among AI developers and researchers exacerbates these challenges. To ensure equitable AI, diverse data collection, federated data-sharing frameworks, and bias-correction techniques are essential. Structural initiatives, such as fairness audits, transparent AI model development processes, and early registration of clinical AI models, alongside inclusive global collaborations like TRAIN-Europe and CHAI, can drive responsible AI adoption. Prioritizing diversity in datasets and among developers and researchers, as well as implementing transparent governance will foster AI systems that uphold ethical principles and deliver equitable healthcare outcomes globally.

  • Research Article
  • Citations: 194
  • 10.1109/access.2021.3127881
A Systematic Review of Human–Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques
  • Jan 1, 2021
  • IEEE Access
  • Mobeen Nazar + 3 more

Artificial intelligence (AI) is an emerging technology that has, in recent decades, gained widespread acceptance in a variety of fields, including virtual support, healthcare, and security. Human-Computer Interaction (HCI) is a field that has been combining AI and human-computer engagement over the past several years in order to create interactive intelligent systems for user interaction. AI is being used in conjunction with HCI in a variety of fields, employing various algorithms and using HCI to provide transparency to the user, allowing them to trust the machine. This work comprehensively examines both the areas of AI and HCI, as well as their subfields. Its main goal was to discover a point of intersection between the two fields, which the literature review conducted in this research identifies as Explainable Artificial Intelligence (XAI), the linking point of HCI and AI. The literature survey encompassed themes identified in the literature, such as XAI and its areas, major XAI aims, and XAI problems and challenges. The study's other major focus was the use of AI, HCI, and XAI in healthcare. The survey also addressed the shortcomings of XAI in healthcare, as well as the field's future potential. The literature indicates that XAI in healthcare is still a novel subject that needs to be explored further in the future.

  • Book Chapter
  • 10.1007/978-3-030-74188-4_16
A Common Ground for Human Rights, AI, and Brain and Mental Health
  • Jan 1, 2021
  • Mónika Sziron

This chapter addresses the current and future challenges of implementing artificial intelligence (AI) in brain and mental health by exploring international regulations of healthcare and AI, and how human rights play a role in these regulations. First, a broad perspective of human rights in AI and human rights in healthcare is reviewed, then regulations of AI in healthcare are discussed, and finally applications of human rights in AI and brain and mental health regulations are considered. The foremost challenge in the blending and development of regulations of AI in healthcare is that currently both AI and healthcare lack accepted international-level regulation. It can be argued that human rights and human rights law are for the most part internationally accepted, and we can use these rights as guidelines for global regulations. However, as philosophical and ethical environments vary across nations, subsequent policies reflect varying conceptions and fulfillments of human rights. Like human rights, the recognized definitions of “AI” and “health” can vary across international borders and even vary within the professions themselves. One of the biggest challenges in the future of AI in brain and mental health will be applying human rights in a practical manner. Initially, the thought of applying human rights in the development of AI in healthcare seems straightforward. In order to develop better AI, better healthcare and, thus, better AI in healthcare, one must simply respect the human rights that are granted by various declarations, covenants, and constitutions. This is so seemingly straightforward that one would think this has already been the case in these developing fields. However, as we explore this notion of applying human rights, we find agreement, disagreement, and variability on a global scale. It is these variabilities that may well hamper the ethical development of AI in brain and mental health internationally.

  • Conference Article
  • 10.54941/ahfe1007088
Transparency for Trust: Enhancing Acceptance and System Integration of Intelligent AI in Healthcare
  • Jan 1, 2026
  • Nikita Islam + 6 more

The integration of intelligent systems into healthcare has transformed how diagnosis, therapy, and clinical decision-making are conceptualized and delivered. Artificial intelligence (AI) now supports a wide range of functions, from predictive analytics to personalized interventions. Despite these advances, the acceptance of AI in healthcare remains uneven, shaped not only by technical performance but also by the degree of transparency surrounding its capabilities and limitations. Without clear communication, trust becomes unstable, oscillating between overreliance and outright rejection. This paper examines transparency as the essential foundation for trust calibration, proposing that transparent AI systems enhance user confidence, preserve the therapeutic alliance, and ultimately contribute to better patient care. Building on prior research in neuroadaptive AI and virtual reality therapy for children with autism spectrum disorder, where transparent EEG-based engagement metrics increased acceptance by clinicians and caregivers, the authors argue that transparency should be understood as a core design principle for the system integration of intelligent AI in healthcare. A synthesis of literature across healthcare AI, trust-in-automation, and human–computer interaction demonstrates consistent evidence that transparency mechanisms improve acceptance. Studies on explainable AI indicate that visual explanations and confidence indicators significantly increase appropriate reliance while reducing the risks of miscalibration. The Human Identity and Autonomy Gap (HIAG) framework provides a valuable lens for interpreting these outcomes, illustrating how transparency mediates trust across cognitive, emotional, and social dimensions. Cognitively, transparency clarifies the reliability and scope of AI decision-making; emotionally, it reduces user uncertainty and anxiety; socially, it preserves clinician authority while fostering collaboration with patients and caregivers. Yet transparency must go beyond technical disclosure. Systems must communicate strengths and limitations, such as bias, data dependency, and contextual blind spots, while ensuring that transparency does not overwhelm users with excessive detail. Evidence also shows that transparency must be culturally adaptive, since trust and adoption vary across professional and cultural contexts, with some prioritizing certainty and governance while others value autonomy, discretion, and relational trust. This paper contributes to theory and practice by proposing design and policy guidelines that embed transparency into healthcare AI development. Strategies include adaptive interfaces that communicate uncertainty through confidence dashboards, culturally sensitive explanations that reflect global variability, and training modules that prepare clinicians and caregivers to interpret AI outputs responsibly. By positioning transparency as a prerequisite rather than an afterthought, intelligent systems can be integrated into healthcare workflows in ways that align with human values, safeguard professional autonomy, and foster equitable adoption across diverse settings. Ultimately, transparency transforms AI from a black-box technology into a trusted partner in healthcare innovation. 
These insights provide not only a conceptual framework for understanding trust calibration in AI-enabled healthcare but also a roadmap for developing intelligent systems that deliver meaningful, safe, and ethically grounded improvements in patient care, ensuring that future applications truly advance medical practice and human well-being.
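
One concrete form of the confidence indicators discussed above is selective prediction: the system reports a confidence score alongside each output and defers low-confidence cases to the clinician. A minimal Python sketch; the threshold, labels, and function name are illustrative, not from the paper:

    # Sketch: a confidence indicator with deferral. The system reports its
    # confidence alongside each prediction and hands low-confidence cases to
    # the clinician. Requires numpy only.
    import numpy as np

    rng = np.random.default_rng(5)

    def predict_with_confidence(prob_disease, defer_below=0.8):
        """Return (label, confidence), deferring when confidence is low."""
        confidence = max(prob_disease, 1 - prob_disease)
        if confidence < defer_below:
            return "defer to clinician", confidence
        label = "disease" if prob_disease >= 0.5 else "no disease"
        return label, confidence

    for prob in rng.uniform(0, 1, 5):
        label, conf = predict_with_confidence(prob)
        print(f"model p = {prob:.2f} -> {label} (confidence {conf:.2f})")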
