Ethical and Legal Governance of Generative AI in Chinese Healthcare

Abstract

The application of generative artificial intelligence (AI) in the healthcare sector can significantly enhance the efficiency of China’s healthcare services. However, risks persist in terms of accuracy, transparency, data privacy, ethics, and bias. These risks manifest in three key areas: the potential erosion of human agency; fairness and justice; and the allocation of liability and responsibility. This study reviews and analyzes the legal and regulatory frameworks China has established for the application of generative AI in healthcare, together with the relevant academic literature. Our findings indicate that although China is actively constructing an ethical and legal governance framework in this field, the regulatory system remains inadequate and faces numerous challenges: regulatory rules lag behind technological development; the legal status of AI under statutes such as the Civil Code is unclear; standards and regulatory schemes for medical AI training data are immature; and no coordinated regulatory mechanism exists among different government departments. In response, this study proposes a governance framework for generative AI in China’s medical field from both legal and ethical perspectives. Given the latest developments in generative AI in China, addressing the challenges of its application in medicine requires enhancing algorithm transparency, standardizing medical data management, and promoting AI legislation. As AI technology continues to evolve and more diverse technical models emerge, this study further proposes establishing a global AI ethics review committee to promote internationally unified ethical and legal review mechanisms for the potential risks of medical AI.


More from: Journal of Multidisciplinary Healthcare
  • Research Article
  • 10.2147/jmdh.s550005
The Skin: A Critical Window into Chronic Kidney Disease and a Call for Collaborative Care
  • Dec 1, 2025
  • Journal of Multidisciplinary Healthcare
  • Nomakhosi Mpofana + 1 more

  • Research Article
  • 10.2147/jmdh.s546721
Assessing Breastfeeding and Family-Centered Care: A Delphi-Based Scale and Its Prediction of Child Psychological Status
  • Dec 1, 2025
  • Journal of Multidisciplinary Healthcare
  • Yafei Yang + 6 more

  • Supplementary Content
  • 10.2147/jmdh.s560139
Risk Factors, Diagnostic Challenges, and Emerging Therapeutic Strategies for ICU-Acquired Weakness: A Brief Review
  • Nov 28, 2025
  • Journal of Multidisciplinary Healthcare
  • Xiaojie Zhang + 7 more

  • Research Article
  • 10.2147/jmdh.s572439
Critical Role of Family Support in the Linguistic Development of Children with Cochlear Implants
  • Nov 28, 2025
  • Journal of Multidisciplinary Healthcare
  • Faisl Alqraini

  • Supplementary Content
  • 10.2147/jmdh.s559188
A SWOT Analysis of Death Literacy Education in Nursing: Implications for Hospice and Palliative Care in China
  • Nov 27, 2025
  • Journal of Multidisciplinary Healthcare
  • Anyun Wang + 3 more

  • Research Article
  • 10.2147/jmdh.s538723
Artificial Intelligence Adoption in Surgery: Cognition, Usage Patterns and Implementation Barriers of DeepSeek Among Healthcare Professionals in China’s Tertiary Hospitals
  • Nov 26, 2025
  • Journal of Multidisciplinary Healthcare
  • Hua Xie + 6 more

  • Supplementary Content
  • 10.2147/jmdh.s566098
Multidisciplinary Team (MDT)-Based Approaches for Liver Cancer Treatment: A Discussion Paper on Tumor Boards and Beyond
  • Nov 26, 2025
  • Journal of Multidisciplinary Healthcare
  • Mengdi Qi + 1 more

  • Supplementary Content
  • 10.2147/jmdh.s567206
Acupuncture for Post-Stroke Lower Limb Dysfunction: Clinical Efficacy and Neurophysiological Mechanisms
  • Nov 25, 2025
  • Journal of Multidisciplinary Healthcare
  • Wei Xie + 4 more

  • Research Article
  • 10.2147/jmdh.s538924
The Effects of Mindfulness Meditation on Core Attention-Deficit Hyperactivity Disorder Symptoms, Family Functioning and Social Functioning in Children Aged Six to Nine
  • Nov 25, 2025
  • Journal of Multidisciplinary Healthcare
  • Lifang Wang + 8 more

  • Research Article
  • 10.2147/jmdh.s553601
Understanding the Mismatch Between Utilization and Demand for Home Medical Care Among Disabled Older Adults in China
  • Nov 24, 2025
  • Journal of Multidisciplinary Healthcare
  • Jinxuan Zheng + 3 more
