Navigating ethical challenges in digital transformation: insights on climate adaptation, microbiology, healthcare, robotics, and AI under the EU AI Act: an expert panel discussion

Abstract

The ethical complexities of technological advancement are growing as fields such as climate adaptation, microbiology, healthcare, robotics, and artificial intelligence (AI) evolve rapidly. While these technologies offer innovative solutions to global challenges, they raise significant ethical concerns. In climate adaptation, AI-driven models and remote sensing technologies prompt questions about data privacy, environmental justice, and equitable access, especially for vulnerable populations. Similarly, advancements in microbiology and healthcare, such as genetic research and digital health tools, present ethical dilemmas related to informed consent, data security, and the exploitation of marginalized communities. In robotics and AI, ethical concerns are heightened due to their potential to automate decision-making, affect employment, and infringe on personal freedoms. The influence of AI in healthcare, law enforcement, and public services highlights the urgent need for ethical oversight to prevent bias and protect human rights. The EU AI Act addresses these challenges by categorizing AI systems by risk and setting stringent guidelines for high-risk applications, especially in sensitive sectors like healthcare. This article emphasizes the importance of balancing innovation with ethical responsibility, advocating for comprehensive regulatory frameworks, interdisciplinary collaboration, and global cooperation to ensure that technological advancements align with ethical standards and societal values.

Similar Papers
  • Research Article
  • Cited by 3
  • 10.3389/frai.2024.1442254
Assuring assistance to healthcare and medicine: Internet of Things, Artificial Intelligence, and Artificial Intelligence of Things.
  • Dec 13, 2024
  • Frontiers in artificial intelligence
  • Poshan Belbase + 4 more

The convergence of healthcare with the Internet of Things (IoT) and Artificial Intelligence (AI) is reshaping medical practice, promising enhanced data-driven insights, automated decision-making, and remote patient monitoring. These technologies have the transformative potential to revolutionize diagnosis, treatment, and patient care. This study aims to explore the integration of IoT and AI in healthcare, outlining their applications, benefits, challenges, and potential risks. By synthesizing existing literature, this study aims to provide insights into the current landscape of AI, IoT, and AIoT in healthcare, identify areas for future research and development, and establish a framework for the effective use of AI in health. A comprehensive literature review included indexed databases such as PubMed/Medline, Scopus, and Google Scholar. Key search terms related to IoT, AI, healthcare, and medicine were employed to identify relevant studies. Papers were screened based on their relevance to the specified themes, and eventually, a selected number of papers were methodically chosen for this review. The integration of IoT and AI in healthcare offers significant advancements, including remote patient monitoring, personalized medicine, and operational efficiency. Wearable sensors, cloud-based data storage, and AI-driven algorithms enable real-time data collection, disease diagnosis, and treatment planning. However, challenges such as data privacy, algorithmic bias, and regulatory compliance must be addressed to ensure responsible deployment of these technologies. Integrating IoT and AI in healthcare holds immense promise for improving patient outcomes and optimizing healthcare delivery. Despite challenges such as data privacy concerns and algorithmic biases, the transformative potential of these technologies cannot be overstated.
Clear governance frameworks, transparent AI decision-making processes, and ethical considerations are essential to mitigate risks and harness the full benefits of IoT and AI in healthcare.

  • Research Article
  • Cited by 2
  • 10.54254/2753-8818/21/20230845
Artificial intelligence in healthcare: Opportunities and challenges
  • Dec 20, 2023
  • Theoretical and Natural Science
  • Huimin Zhang

The development of Artificial Intelligence (AI) in healthcare has had a significant impact on the field. AI in healthcare can provide more accurate diagnoses and interventions for patients. AI can predict, diagnose, and treat diseases, facilitate the maximum use of healthcare resources by integrating medical information, increase efficiency, and reduce overcrowding of healthcare resources. However, the application of AI in healthcare also faces challenges such as accountability, algorithmic security, and data privacy. This paper discusses the application of AI in healthcare and explores the challenges it faces, including accountability traceability, algorithmic safety, data security, and ethical issues, and makes targeted recommendations. This study provides an in-depth exploration of the application of AI in healthcare, helping to improve the accuracy and efficiency of AI applications in healthcare, as well as providing necessary guidance and references for optimizing and enhancing AI technologies.

  • Research Article
  • Cited by 33
  • 10.2196/53616
Benefits and Risks of AI in Health Care: Narrative Review.
  • Nov 18, 2024
  • Interactive journal of medical research
  • Margaret Chustecki

The integration of artificial intelligence (AI) into health care has the potential to transform the industry, but it also raises ethical, regulatory, and safety concerns. This review paper provides an in-depth examination of the benefits and risks associated with AI in health care, with a focus on issues like biases, transparency, data privacy, and safety. This study aims to evaluate the advantages and drawbacks of incorporating AI in health care. This assessment centers on the potential biases in AI algorithms, transparency challenges, data privacy issues, and safety risks in health care settings. Studies included in this review were selected based on their relevance to AI applications in health care, focusing on ethical, regulatory, and safety considerations. Inclusion criteria encompassed peer-reviewed articles, reviews, and relevant research papers published in English. Exclusion criteria included non-peer-reviewed articles, editorials, and studies not directly related to AI in health care. A comprehensive literature search was conducted across 8 databases: OVID MEDLINE, OVID Embase, OVID PsycINFO, EBSCO CINAHL Plus with Full Text, ProQuest Sociological Abstracts, ProQuest Philosopher's Index, ProQuest Advanced Technologies & Aerospace, and Wiley Cochrane Library. The search was last updated on June 23, 2023. Results were synthesized using qualitative methods to identify key themes and findings related to the benefits and risks of AI in health care. The literature search yielded 8796 articles. After removing duplicates and applying the inclusion and exclusion criteria, 44 studies were included in the qualitative synthesis. This review highlights the significant promise that AI holds in health care, such as enhancing health care delivery by providing more accurate diagnoses, personalized treatment plans, and efficient resource allocation. 
However, persistent concerns remain, including biases ingrained in AI algorithms, a lack of transparency in decision-making, potential compromises of patient data privacy, and safety risks associated with AI implementation in clinical settings. In conclusion, while AI presents the opportunity for a health care revolution, it is imperative to address the ethical, regulatory, and safety challenges linked to its integration. Proactive measures are required to ensure that AI technologies are developed and deployed responsibly, striking a balance between innovation and the safeguarding of patient well-being.

  • Research Article
  • Cited by 18
  • 10.1186/s12909-024-06035-4
Global cross-sectional student survey on AI in medical, dental, and veterinary education and practice at 192 faculties
  • Sep 28, 2024
  • BMC Medical Education
  • Felix Busch + 99 more

Background: The successful integration of artificial intelligence (AI) in healthcare depends on the global perspectives of all stakeholders. This study aims to answer the research question: What are the attitudes of medical, dental, and veterinary students towards AI in education and practice, and what are the regional differences in these perceptions? Methods: An anonymous online survey was developed based on a literature review and expert panel discussions. The survey assessed students' AI knowledge, attitudes towards AI in healthcare, current state of AI education, and preferences for AI teaching. It consisted of 16 multiple-choice items, eight demographic queries, and one free-field comment section. Medical, dental, and veterinary students from various countries were invited to participate via faculty newsletters and courses. The survey measured technological literacy, AI knowledge, current state of AI education, preferences for AI teaching, and attitudes towards AI in healthcare using Likert scales. Data were analyzed using descriptive statistics, Mann–Whitney U-test, Kruskal–Wallis test, and Dunn-Bonferroni post hoc test. Results: The survey included 4313 medical, 205 dentistry, and 78 veterinary students from 192 faculties and 48 countries. Most participants were from Europe (51.1%), followed by North/South America (23.3%) and Asia (21.3%). Students reported positive attitudes towards AI in healthcare (median: 4, IQR: 3–4) and a desire for more AI teaching (median: 4, IQR: 4–5). However, they had limited AI knowledge (median: 2, IQR: 2–2), lack of AI courses (76.3%), and felt unprepared to use AI in their careers (median: 2, IQR: 1–3). Subgroup analyses revealed significant differences between the Global North and South (r = 0.025 to 0.185, all P < .001) and across continents (r = 0.301 to 0.531, all P < .001), with generally small effect sizes. Conclusions: This large-scale international survey highlights medical, dental, and veterinary students' positive perceptions of AI in healthcare, their strong desire for AI education, and the current lack of AI teaching in medical curricula worldwide. The study identifies a need for integrating AI education into medical curricula, considering regional differences in perceptions and educational needs. Trial registration: Not applicable (no clinical trial).
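The nonparametric workflow this survey describes (Likert responses compared with Mann–Whitney U and Kruskal–Wallis tests, reporting an effect size r) can be sketched as follows. The data below are synthetic and the group labels hypothetical; this is an illustration of the method, not the study's actual dataset:

```python
# Illustrative sketch only: synthetic Likert-scale responses (1-5),
# analyzed with the nonparametric tests named in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
north = rng.integers(1, 6, size=200)   # hypothetical "Global North" responses
south = rng.integers(1, 6, size=150)   # hypothetical "Global South" responses
asia = rng.integers(1, 6, size=120)    # hypothetical third subgroup

# Two-group comparison: Mann-Whitney U test
u_stat, p_two = stats.mannwhitneyu(north, south, alternative="two-sided")

# Effect size r = |Z| / sqrt(N), using the normal approximation for U
# (ignoring tie corrections, which matter for heavily tied Likert data)
n1, n2 = len(north), len(south)
mu = n1 * n2 / 2
sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs((u_stat - mu) / sigma) / np.sqrt(n1 + n2)

# Three or more groups: Kruskal-Wallis omnibus test
h_stat, p_kw = stats.kruskal(north, south, asia)
print(f"Mann-Whitney U p={p_two:.3f}, effect size r={r:.3f}; Kruskal-Wallis p={p_kw:.3f}")
```

A Dunn-Bonferroni post hoc step, as used in the study, would follow a significant Kruskal–Wallis result; the `scikit-posthocs` package offers `posthoc_dunn` with `p_adjust="bonferroni"` for this, though the study's exact tooling is not stated.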

  • Research Article
  • Cited by 3
  • 10.32629/jai.v7i5.1535
A meta-study on optimizing healthcare performance with artificial intelligence and machine learning
  • Mar 7, 2024
  • Journal of Autonomous Intelligence
  • Bongs Lainjo

This study explores the transformative impact of Artificial Intelligence (AI) and Machine Learning (ML) in healthcare, focusing on enhancing patient care through operational efficiency and medical innovation. Employing a meta-study approach, it comprehensively analyzes the applications and ethical aspects of AI and ML in healthcare, highlighting successful implementations like IBM Watson for Oncology and Google DeepMind’s AlphaFold. The research emphasizes AI’s significant contributions to diagnostics, precision medicine, and medical imaging interpretation, alongside its role in optimizing healthcare operations and enabling personalized medicine through data analysis. However, it also addresses challenges such as algorithmic bias, safety, data privacy, and the need for regulatory frameworks. The study underlines the importance of continued research, interdisciplinary collaboration, and adaptive regulations to ensure the responsible and ethical use of AI and ML in healthcare.

  • Research Article
  • Cited by 8
  • 10.1001/jamanetworkopen.2025.14452
Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients
  • Jun 10, 2025
  • JAMA Network Open
  • Lena Hoffmann + 99 more

The successful implementation of artificial intelligence (AI) in health care depends on its acceptance by key stakeholders, particularly patients, who are the primary beneficiaries of AI-driven outcomes. This study surveyed hospital patients to investigate their trust, concerns, and preferences toward the use of AI in health care and diagnostics and to assess the sociodemographic factors associated with patient attitudes. This cross-sectional study developed and implemented an anonymous quantitative survey between February 1 and November 1, 2023, using a nonprobability sample at 74 hospitals in 43 countries. Participants included hospital patients 18 years of age or older who agreed to voluntary participation in the survey, presented in 1 of 26 languages. Information sheets and paper surveys were handed out by hospital staff and posted in conspicuous hospital locations. The primary outcome was participant responses to a 26-item instrument containing a general data section (8 items) and 3 dimensions (trust in AI, AI and diagnosis, preferences and concerns toward AI) with 6 items each. Subgroup analyses used cumulative link mixed and binary mixed-effects models. In total, 13 806 patients participated, including 8951 (64.8%) in the Global North and 4855 (35.2%) in the Global South. Their median (IQR) age was 48 (34-62) years, and 6973 (50.5%) were male. The survey results indicated a predominantly favorable general view of AI in health care, with 57.6% of respondents (7775 of 13 502) expressing a positive attitude. However, attitudes exhibited notable variation based on demographic characteristics, health status, and technological literacy.
Female respondents (3511 of 6318 [55.6%]) exhibited fewer positive attitudes toward AI use in medicine than male respondents (4057 of 6864 [59.1%]), and participants with poorer health status exhibited fewer positive attitudes toward AI use in medicine (eg, 58 of 199 [29.2%] with rather negative views) than patients with very good health (eg, 134 of 2538 [5.3%] with rather negative views). Conversely, higher levels of AI knowledge and frequent use of technology devices were associated with more positive attitudes. Notably, fewer than half of the participants expressed positive attitudes regarding all items pertaining to trust in AI. The lowest level of trust was observed for the accuracy of AI in providing information regarding treatment responses (5637 of 13 480 respondents [41.8%] trusted AI). Patients preferred explainable AI (8816 of 12 563 [70.2%]) and physician-led decision-making (9222 of 12 652 [72.9%]), even if it meant slightly compromised accuracy. In this cross-sectional study of patient attitudes toward AI use in health care across 6 continents, findings indicated that tailored AI implementation strategies should take patient demographics, health status, and preferences for explainable AI and physician oversight into account.
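The subgroup percentages above follow directly from the reported counts. As a quick illustrative check, the sketch below recomputes them and runs a simple two-proportion z-test; note the study itself used mixed-effects models to account for clustering by hospital, so this test is a simplification, not the paper's analysis:

```python
# Reproduce the reported female/male subgroup percentages and run a
# simple two-proportion z-test (illustration only; the study used
# mixed-effects models, not this pooled test).
from math import sqrt, erfc

pos_f, n_f = 3511, 6318   # female respondents with positive attitudes (reported)
pos_m, n_m = 4057, 6864   # male respondents with positive attitudes (reported)

p_f, p_m = pos_f / n_f, pos_m / n_m          # ~0.556 and ~0.591
p_pool = (pos_f + pos_m) / (n_f + n_m)       # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_f + 1 / n_m))
z = (p_m - p_f) / se
p_value = erfc(abs(z) / sqrt(2))             # two-sided normal p-value

print(f"female: {p_f:.1%}, male: {p_m:.1%}, z={z:.2f}, p={p_value:.2g}")
```

The recomputed proportions match the abstract's 55.6% and 59.1%, and the gap is far larger than its standard error, consistent with the paper's finding of a sex difference in attitudes.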

  • Research Article
  • 10.51731/cjht.2024.1032
RapidAI for Stroke Detection and AI Implementation Review
  • Nov 22, 2024
  • Canadian Journal of Health Technologies
  • CDA-AMC

RapidAI Review for Stroke Detection What Is the Issue? Stroke is a sudden loss of neurologic function caused by poor or interrupted blood flow within the brain. It is 1 of the leading causes of death and a major cause of disability in Canada. For patients with suspected stroke, prompt evaluation using CT imaging and other tests can help to determine the type of stroke, to assess the severity of damage, and to guide treatment decisions. RapidAI is an artificial intelligence (AI)–enabled software platform that facilitates the viewing, processing, and analysis of CT images to aid clinicians in assessing patients with suspected stroke. Understanding the potential benefits and harms of using RapidAI is important to clarify its role in stroke detection. What Did We Do? We sought to identify, synthesize, and critically appraise literature evaluating the effectiveness, accuracy, and cost-effectiveness of RapidAI for detecting large-vessel occlusion (LVO) (i.e., ischemic stroke) and intracranial hemorrhage (ICH) (i.e., hemorrhagic stroke). We searched key resources, including journal citation databases, and conducted a focused internet search for relevant evidence published up to July 22, 2024. We screened citations for inclusion based on predefined criteria, critically appraised the included studies, narratively summarized the findings, and assessed the certainty of evidence. Our methods were guided by the Scottish Health Technologies Group’s health technology assessment (HTA) framework. We highlighted and reflected on the ethical and equity implications of using RapidAI for stroke detection, found in the clinical literature, integrating these considerations throughout the review. We engaged a patient contributor who had experienced a hemorrhagic stroke, to learn about her experience, perspectives, and priorities. Additionally, we incorporated feedback from clinical and ethics experts, the manufacturer, and other interested parties. What Did We Find? 
We found 2 cohort studies and 11 diagnostic accuracy studies that assessed the effectiveness and accuracy of RapidAI for detecting stroke. Among these, 3 studies evaluated RapidAI as it is intended to be used in clinical practice (i.e., to complement clinician interpretation of CT images), while the remaining 10 studies assessed RapidAI as a standalone intervention. The patient contributor identified important outcomes for stroke care, including improving speed and accuracy of diagnosis, minimizing the damaging effects of stroke, and reducing mortality rates. She also highlighted ethical considerations regarding the use of AI in health care, such as providing data privacy and equitable access, as well as informing patients about the use of AI technologies in the care pathway. Low-certainty evidence suggests that evaluation of CT angiography images by Rapid LVO combined with clinician interpretation, compared to clinician interpretation alone, may result in clinically important reductions in radiology-report turnaround time in patients with suspected stroke. For detecting ICH, low-certainty evidence suggests that Rapid ICH combined with clinician interpretation, using clinician interpretation as a reference standard, has a sensitivity of 92% (95% confidence interval [CI], 78% to 98%) and a specificity of 100% (95% CI, 98% to 100%). However, estimates of sensitivity and specificity for detecting LVO varied, based on studies using different modules of RapidAI as a standalone intervention, providing only indirect accuracy data. The effects of RapidAI on other time-to-intervention metrics, measures of physical and cognitive function, and response to therapy (e.g., reperfusion rates) were very uncertain. We did not identify any evidence on the effects of RapidAI on many important clinical outcomes, including patient harms, mortality, health-related quality of life, length of hospital stay, or health care resource implications. 
We did not find any studies on the cost-effectiveness of RapidAI for detecting stroke that met our selection criteria for this review. Ethical and equity considerations related to patient autonomy, privacy, transparency, access, and algorithmic bias have implications across the technology life cycle when using RapidAI for detecting stroke. What Does This Mean? RapidAI has the potential to improve acute stroke care by creating efficiencies in the diagnostic process. However, the impact of RapidAI on many outcomes, including those that are important to patients, is uncertain due to limitations of the available evidence. To improve the certainty of findings, there is a need for evidence from robustly conducted studies at lower risk of bias that enrol diverse patient populations and measure outcomes that are important to patients, with improved reporting. The cost-effectiveness of RapidAI for stroke detection is currently unknown. In addition to the evidence on the effectiveness and accuracy of RapidAI for detecting stroke, decision-makers may wish to reflect on the ethical and equity considerations that arise during the deployment of AI-enabled technologies, such as those related to autonomy, privacy, transparency, and explainability of machine-learning models, and the need for considerations related to equity and access in their design, development, and deployment. AI Implementation Review What Is the Issue? Globally, we are seeing a widespread increase in the interest, development, and use of artificial intelligence (AI)–enabled medical devices. Comprehensive evaluation through health technology assessment (HTA) can ensure that digital health technologies (DHTs), including AI-enabled medical devices, are adequately equipped to balance benefits and harms, while being interoperable and equitably accessible to people living in Canada. 
In the UK, a checklist called Digital Technology Assessment Criteria (DTAC) is used as an add-on component to HTAs to capture additional considerations for the implementation of DHTs. The 5 core areas of DTAC are clinical safety, data protection, technical security, interoperability, and usability and accessibility. In Canada, we currently do not have a DTAC equivalent that can be used as an add-on to traditional HTA. This implementation review is needed to assist health systems in Canada in preparing for the uptake of AI-enabled medical devices, as these technologies pose new challenges. We assessed whether the safeguards and assessment criteria captured by DTAC and other AI-related resources are in place to inform decision-making around the digital infrastructure elements of implementation. What Did We Do? We conducted an implementation review, using a phased approach, to determine whether DTAC can be applied to the health care context in Canada to inform the implementation of DHTs and to identify any additional implementation considerations specific to the use of AI-enabled medical devices in Canada. We integrated ethics and equity considerations across both phases of the review. In phase 1, we applied DTAC to the health care context in Canada by determining whether we have equivalent or similar measures, strategies, and policies in place to implement DHTs safely. In phase 2, an information specialist searched for literature to identify implementation guidance specific to AI and relevant to Canada to supplement DTAC. One reviewer screened publications for inclusion based on predefined criteria, incorporated relevant information into tables, and summarized the findings narratively. We leveraged patient engagement activities conducted in a concurrent Canada’s Drug Agency review of a specific AI-enabled medical device in stroke detection to learn from a patient contributor with lived experience of a hemorrhagic stroke. 
We learned about her experience, perspectives, priorities, and thoughts about using AI in clinical decision-making. What Did We Find? With some caveats, we found that many of DTAC’s assessment criteria have equivalent or similar guidance for the health care context in Canada. Some exceptions are derived from the differences in Canada’s current governance and health care structure. Further investigation is required to understand whether certain policies in Canada provide sufficient coverage to fulfill DTAC’s criteria (e.g., clinical safety). We identified several considerations for implementing AI-enabled medical devices, with many having underlying ethical and equity implications. Much of the identified guidance emphasizes implementation considerations that apply to the AI system’s entire life cycle, including the most prevalent consideration: ensuring AI-enabled medical devices are monitored, maintained, and sustainable. Examples of additional considerations include AI data governance and data protection; transparency and explainability; and inclusiveness, equity, and minimization of bias. The patient contributor highlighted several considerations relevant for this review, such as data protection and privacy as well as accessibility and equity. What Does This Mean? We have identified key considerations for AI-enabled medical devices that health care decision-makers may consider for the safe and successful implementation of AI in health care in Canada. While Canada has DTAC-equivalent or similar measures, strategies, or policies in place, we identified a need for a checklist like DTAC that senior decision-makers can use. This checklist could be an adaptation of DTAC and could include additional implementation considerations for AI-enabled medical devices to ensure that these technologies meet the minimum baseline standards set out by DTAC and inform the next steps for the safe and successful implementation of AI-enabled medical devices in Canada. 
This implementation review for all AI-enabled medical devices is to be used alongside reviews of specific AI technologies, including the concurrent review of RapidAI, and will serve as a foundational report to be tailored for each AI topic and updated with the latest developments in the regulation and other aspects of management of AI in the context of Canada.
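Diagnostic accuracy figures like those reported for Rapid ICH follow directly from a 2×2 confusion matrix. The sketch below uses hypothetical counts chosen only to land near the reported 92% sensitivity and 100% specificity, and a Wilson score interval rather than whatever exact method the included studies applied, so the interval bounds differ slightly from the review's:

```python
# Sensitivity/specificity from a 2x2 confusion matrix, with 95% Wilson
# score intervals. All counts are hypothetical illustrations; the
# underlying studies' counts and CI method may differ.
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

tp, fn = 35, 3      # true positives / false negatives (hypothetical)
tn, fp = 500, 0     # true negatives / false positives (hypothetical)

sensitivity = tp / (tp + fn)            # ~0.92
specificity = tn / (tn + fp)            # 1.00
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
spec_lo, spec_hi = wilson_ci(tn, tn + fp)

print(f"sensitivity {sensitivity:.0%} (95% CI {sens_lo:.0%} to {sens_hi:.0%})")
print(f"specificity {specificity:.0%} (95% CI {spec_lo:.0%} to {spec_hi:.0%})")
```

The wide sensitivity interval despite a high point estimate shows why the review reports CIs alongside point estimates: with few positive cases, small count changes move the estimate substantially.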

  • Research Article
  • Cited by 26
  • 10.3389/fgene.2022.902542
"Democratizing" artificial intelligence in medicine and healthcare: Mapping the uses of an elusive term.
  • Aug 15, 2022
  • Frontiers in genetics
  • Giovanni Rubeis + 2 more

Introduction: “Democratizing” artificial intelligence (AI) in medicine and healthcare is a vague term that encompasses various meanings, issues, and visions. This article maps the ways this term is used in discourses on AI in medicine and healthcare and uses this map for a normative reflection on how to direct AI in medicine and healthcare towards desirable futures. Methods: We searched peer-reviewed articles from Scopus, Google Scholar, and PubMed along with grey literature using search terms “democrat*”, “artificial intelligence” and “machine learning”. We approached both as documents and analyzed them qualitatively, asking: What is the object of democratization? What should be democratized, and why? Who is the demos who is said to benefit from democratization? And what kind of theories of democracy are (tacitly) tied to specific uses of the term? Results: We identified four clusters of visions of democratizing AI in healthcare and medicine: 1) democratizing medicine and healthcare through AI, 2) multiplying the producers and users of AI, 3) enabling access to and oversight of data, and 4) making AI an object of democratic governance. Discussion: The envisioned democratization in most visions mainly focuses on patients as consumers and relies on or limits itself to free market-solutions. Democratization in this context requires defining and envisioning a set of social goods, and deliberative processes and modes of participation to ensure that those affected by AI in healthcare have a say on its development and use.

  • Book Chapter
  • Cited by 6
  • 10.1007/978-3-030-81907-1_18
The Ethics of AI in Health Care: A Mapping Review
  • Jan 1, 2021
  • Jessica Morley + 6 more

This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched (Scopus, Google Scholar, PhilPapers, Web of Science, PubMed), in April 2019, to support the following research question: “how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be ‘ethically mindful?’”. A series of screening stages were carried out, for example, removing articles that focused on digital health in general (e.g. data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, and societal or sectoral. Finally, we outline a number of considerations for policymakers and regulators, mapping these to existing literature, and categorising each as epistemic, normative or traceability-related and at the relevant level of abstraction. This article contributes to the debate on AI in health care by offering a comprehensive analysis of the relevant literature, focusing on the ethical implications for individuals, interpersonal relationships, groups, institutions, societies and the health sector as a whole.
Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI; maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new ‘AI winter’ could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.

  • Research Article
  • Cited by 2
  • 10.47941/ijhs.1949
AI and Machine Learning in Healthcare - Applications, Challenges and Ethics
  • Jun 4, 2024
  • International Journal of Health Sciences
  • Swapna Nadakuditi + 2 more

Purpose: This research aims to discuss how AI and machine learning can be used in healthcare, challenges associated with implementation and the ethics around the widespread adoption of AI in the health care ecosystem while understanding the regulations around the technology implementation. Methodology: By conducting qualitative analysis on various applications of AI and machine learning in health care and its impacts on patient care, the analysis summarizes the challenges and ethics associated with the implementation. Findings: Results indicate that in the last few years, the data collected in the healthcare industry has increased manifold. Some studies suggest that structured data is growing by 40% each year, unstructured data is growing by over 80% and global data produced is forty zettabytes (ZB) as of 2020. With the increased regulatory and compliance requirements, effective data governance is a mandate for industries like healthcare where there is greater focus on data privacy, data security and personal information protection. This rapid explosion of data and the need to ensure the data is available at the right time has led to increased adoption of artificial intelligence (AI) and machine learning solutions across healthcare organizations to gain meaningful insights from the data collected. These technologies are proving to transform many aspects of the healthcare ecosystem, from patient care to administrative functions. Unique contribution to theory, policy, and practice: Currently AI and machine learning are aiding providers and patients by improving health outcomes, but further research is necessary to validate that these technologies comply with regulatory guidelines without compromising patient care or the ethics involved when it comes to patient security and privacy.

  • Research Article
  • Cited by 2
  • 10.1200/edbk-25-481490
Artificial Intelligence in the Clinic: Creating Harmony or Just Adding Noise?
  • Jun 1, 2025
  • American Society of Clinical Oncology educational book. American Society of Clinical Oncology. Annual Meeting
  • Mariam Afzal + 3 more

Although still limited, the integration of artificial intelligence (AI) in health care has rapidly expanded in the past few years, especially in oncology clinics. In this article, AI refers to the development and implementation of computer systems capable of performing tasks that typically require human intelligence, such as language understanding, learning, and reasoning. AI technology is currently being used as ambient listening technology (AI-driven systems that passively capture verbal interactions between patients and health care providers), patient messaging chatbots (AI-enabled conversational agents designed to interact with patients via text or voice platforms), and as tools for inbox management and patient care delivery. However, the question remains: Is AI truly fostering harmony in health care, or just adding noise to an already complex system? Although the current applications of this technology have shown promising results in affecting routine care provided by physicians, this article will focus on AI's broader impact on the health care system, highlighting how ambient listening technology can improve the clinical experience for both patients and physicians, whether AI can reduce physician burnout by minimizing in-basket workload (the volume of messages that clinicians must manage within the electronic health record system), and AI's use as a diagnostic tool. Key concerns addressed in this article include the potential pitfalls associated with AI integration, such as the need for proper clinician training to optimize AI algorithms while ensuring patient safety. The ambiguities surrounding the disclosure of AI in health care and the lack of a legal framework also raise significant concerns regarding patient autonomy, data privacy, trust, and beneficence. Future directions of AI in addressing these challenges are explored, alongside its potential integration into overburdened hospitals, underserved communities, telemedicine, and rural health care settings.

  • Book Chapter
  • 10.1007/978-3-030-74188-4_16
A Common Ground for Human Rights, AI, and Brain and Mental Health
  • Jan 1, 2021
  • Mónika Sziron

This chapter addresses the current and future challenges of implementing artificial intelligence (AI) in brain and mental health by exploring international regulations of healthcare and AI, and how human rights play a role in these regulations. First, a broad perspective of human rights in AI and human rights in healthcare is reviewed, then regulations of AI in healthcare are discussed, and finally applications of human rights in AI and brain and mental health regulations are considered. The foremost challenge in the blending and development of regulations of AI in healthcare is that currently both AI and healthcare lack accepted international-level regulation. It can be argued that human rights and human rights law are for the most part internationally accepted, and we can use these rights as guidelines for global regulations. However, as philosophical and ethical environments vary across nations, subsequent policies reflect varying conceptions and fulfillments of human rights. Like human rights, the recognized definitions of “AI” and “health” can vary across international borders and even vary within the professions themselves. One of the biggest challenges in the future of AI in brain and mental health will be applying human rights in a practical manner. Initially, the thought of applying human rights in the development of AI in healthcare seems straightforward. In order to develop better AI, better healthcare and, thus, better AI in healthcare, one must simply respect the human rights that are granted by various declarations, covenants, and constitutions. This is so seemingly straightforward that one would think this has already been the case in these developing fields. However, as we explore this notion of applying human rights, we find agreement, disagreement, and variability on a global scale. It is these variabilities that may well hamper the ethical development of AI in brain and mental health internationally.

  • Research Article
  • Cited by 9
  • 10.2196/56306
Finding Consensus on Trust in AI in Health Care: Recommendations From a Panel of International Experts.
  • Feb 19, 2025
  • Journal of medical Internet research
  • Georg Starke + 17 more

The integration of artificial intelligence (AI) into health care has become a crucial element in the digital transformation of health systems worldwide. Despite the potential benefits across diverse medical domains, a significant barrier to the successful adoption of AI systems in health care applications remains the prevailing low user trust in these technologies. Crucially, this challenge is exacerbated by the lack of consensus among experts from different disciplines on the definition of trust in AI within the health care sector. We aimed to provide the first consensus-based analysis of trust in AI in health care based on an interdisciplinary panel of experts from different domains. Our findings can be used to address the problem of defining trust in AI in health care applications, fostering the discussion of concrete real-world health care scenarios in which humans explicitly interact with AI systems. We used a combination of framework analysis and a 3-step consensus process involving 18 international experts from the fields of computer science, medicine, philosophy of technology, ethics, and social sciences. Our process consisted of a synchronous phase during an expert workshop where we discussed the notion of trust in AI in health care applications, defined an initial framework of important elements of trust to guide our analysis, and agreed on 5 case studies. This was followed by a 2-step iterative, asynchronous process in which the authors further developed, discussed, and refined notions of trust with respect to these specific cases. Our consensus process identified key contextual factors of trust, namely, an AI system's environment, the actors involved, and framing factors, and analyzed causes and effects of trust in AI in health care. Our findings revealed that certain factors were applicable across all discussed cases yet also pointed to the need for a fine-grained, multidisciplinary analysis bridging human-centered and technology-centered approaches. While regulatory boundaries and technological design features are critical to successful AI implementation in health care, ultimately, communication and positive lived experiences with AI systems will be at the forefront of user trust. Our expert consensus allowed us to formulate concrete recommendations for future research on trust in AI in health care applications. This paper advocates for a more refined and nuanced conceptual understanding of trust in the context of AI in health care. By synthesizing insights into commonalities and differences among specific case studies, this paper establishes a foundational basis for future debates and discussions on trusting AI in health care.

  • Research Article
  • Cited by 1
  • 10.59022/ujldp.63
Legal Application of Artificial Intelligence in Healthcare
  • Feb 28, 2023
  • Uzbek Journal of Law and Digital Policy
  • Ekaterina Kan

The integration of artificial intelligence (AI) in healthcare has the potential to revolutionize the industry by improving patient outcomes and increasing efficiency. However, the rapid development and implementation of AI technologies raise complex legal issues and challenges. This article explores the key legal aspects of AI integration in healthcare, including data privacy and security, liability and accountability, intellectual property, and regulatory compliance. It examines relevant international and national legal instruments, regulations, and guidelines, as well as industry-specific standards that apply to AI in healthcare. The study also analyzes case studies and practical applications to highlight legal challenges and resolutions, lessons learned, and best practices. The discussion addresses the implications of the results, comparing the legal landscape for AI in healthcare to other industries and countries and highlighting potential future legal developments and challenges. The conclusion summarizes key findings, offers recommendations for integrating AI in healthcare systems while addressing legal concerns, and proposes future directions for legal research and policy development in the context of AI and healthcare. This comprehensive analysis aims to inform healthcare providers, AI developers, and policymakers on the legal landscape surrounding AI in healthcare, providing valuable insights to navigate this complex domain and harness the potential of AI to transform healthcare delivery.

  • Research Article
  • 10.58252/artukluhealth.1756166
Artificial Intelligence in Healthcare Management: Leadership Transformation and Strategic Directions
  • Dec 31, 2025
  • Artuklu Health
  • Zübeyde Ağalday + 1 more

Introduction: Artificial intelligence has rapidly gained importance as a transformative force in healthcare, influencing not only clinical processes but also management practices, leadership models, and strategic decision-making. This review explores the evolving role of AI in health management, focusing on its impact on institutional transformation, leadership paradigms, and strategic orientations. Methods: This article adopts a narrative literature review approach to synthesize recent theoretical and empirical studies on artificial intelligence and leadership in healthcare. Peer-reviewed studies published between 2017 and 2025 were identified through databases such as PubMed, Scopus, and Web of Science, using keywords including "artificial intelligence in healthcare," "healthcare leadership," "digital transformation," and "strategic management in healthcare." Studies were selected based on their relevance to AI's role in organizational change and leadership development in health systems. Results: The reviewed literature identifies three major themes: (1) the integration of AI in healthcare operations, including resource allocation, patient flow management, and crisis response; (2) the transformation of leadership styles from hierarchical to data-driven, agile, and ethically responsible models; and (3) the strategic positioning of AI in fostering sustainable, inclusive, and future-oriented organizational cultures. These findings suggest a shift in leadership expectations from operational control to strategic vision and ethical AI governance. Conclusion: AI is reshaping health management by enabling leaders to develop strategic foresight, support evidence-based decision-making, and drive digital transformation. The success of AI integration depends not only on technological adoption but also on ethical frameworks, organizational learning, and leadership vision. Future healthcare leaders should combine digital competencies and emotional intelligence with a human-centered approach to leadership.