Ethical and Legal Governance of Generative AI in Chinese Healthcare
The application of generative artificial intelligence (AI) in healthcare can significantly enhance the efficiency of China's healthcare services. However, risks persist in terms of accuracy, transparency, data privacy, ethics, and bias. These risks are manifested in three key areas: first, the potential erosion of human agency; second, issues of fairness and justice; and third, questions of liability and responsibility. This study reviews and analyzes the legal and regulatory frameworks established in China for the application of generative AI in healthcare, together with the relevant academic literature. Our findings indicate that while China is actively constructing an ethical and legal governance framework in this field, the regulatory system remains inadequate and faces numerous challenges: lagging regulatory rules; the unclear legal status of AI in laws such as the Civil Code; immature standards and regulatory schemes for medical AI training data; and the absence of a coordinated regulatory mechanism among government departments. In response, this study proposes a governance framework for generative AI in China's medical field from both legal and ethical perspectives. Given the latest developments in generative AI in China, the challenges of its application in medicine must be addressed from both angles, including enhancing algorithm transparency, standardizing medical data management, and promoting AI legislation. As AI technology continues to evolve, more diverse technical models will emerge. To address the potential risks associated with medical AI, this study also proposes establishing a global AI ethics review committee to promote the formation of internationally unified ethical and legal review mechanisms.
228
- 10.1002/mds.27376
- Apr 27, 2018
- Movement Disorders
51
- 10.2196/40031
- May 23, 2023
- Journal of Medical Internet Research
3
- 10.1136/jme-2023-109737
- Sep 12, 2024
- Journal of Medical Ethics
3541
- 10.1126/science.aax2342
- Oct 24, 2019
- Science
665
- 10.1136/jnnp.2006.103788
- Nov 10, 2006
- Journal of Neurology, Neurosurgery & Psychiatry
7
- 10.1007/s11948-024-00486-0
- Jan 1, 2024
- Science and Engineering Ethics
1
- 10.1108/jhom-01-2025-0007
- Apr 1, 2025
- Journal of Health Organization and Management
179
- 10.1136/medethics-2018-105118
- Feb 22, 2019
- Journal of Medical Ethics
17
- 10.4103/singaporemedj.smj-2023-279
- Mar 1, 2024
- Singapore Medical Journal
80
- 10.3390/healthcare12050562
- Feb 28, 2024
- Healthcare
- Research Article
25
- 10.3389/fgene.2022.902542
- Aug 15, 2022
- Frontiers in Genetics
Introduction: “Democratizing” artificial intelligence (AI) in medicine and healthcare is a vague term that encompasses various meanings, issues, and visions. This article maps the ways this term is used in discourses on AI in medicine and healthcare and uses this map for a normative reflection on how to direct AI in medicine and healthcare towards desirable futures. Methods: We searched peer-reviewed articles from Scopus, Google Scholar, and PubMed, along with grey literature, using the search terms “democrat*”, “artificial intelligence”, and “machine learning”. We treated both as documents and analyzed them qualitatively, asking: What is the object of democratization? What should be democratized, and why? Who is the demos said to benefit from democratization? And what kinds of theories of democracy are (tacitly) tied to specific uses of the term? Results: We identified four clusters of visions of democratizing AI in healthcare and medicine: 1) democratizing medicine and healthcare through AI, 2) multiplying the producers and users of AI, 3) enabling access to and oversight of data, and 4) making AI an object of democratic governance. Discussion: The democratization envisioned in most of these visions focuses mainly on patients as consumers and relies on, or limits itself to, free-market solutions. Democratization in this context requires defining and envisioning a set of social goods, along with deliberative processes and modes of participation, to ensure that those affected by AI in healthcare have a say in its development and use.
- Research Article
- 10.2196/71236
- Jun 2, 2025
- Journal of Medical Internet Research
Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. 
Trustworthiness was shown to be not a feature but a relational process involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the need to develop AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI lifecycle, from conception and technical development through clinical validation and real-world deployment.
- Preprint Article
- 10.2196/preprints.71236
- Jan 13, 2025
- Research Article
1
- 10.1016/j.artmed.2025.103169
- Sep 1, 2025
- Artificial Intelligence in Medicine
From black box to clarity: Strategies for effective AI informed consent in healthcare.
- Research Article
- 10.1108/lhs-01-2025-0018
- Sep 9, 2025
- Leadership in Health Services
Purpose This paper aims to explore the paradigm shift in leadership and strategic management driven by the integration of responsible artificial intelligence (AI) in healthcare. It explores the evolving role of leadership in adapting to AI technologies while ensuring ethical governance, transparency and accountability in healthcare decision-making. Design/methodology/approach This study conducts a comprehensive review of current literature, case studies and industry reports to evaluate the implications of responsible AI adoption in healthcare leadership. It focuses on key areas such as AI-driven decision-making, resource optimisation, crisis management and patient care, while also addressing challenges in integrating AI technologies effectively. Findings The integration of AI in healthcare is transforming leadership from traditional, experience-based decision-making to data-driven, AI-enhanced strategies. Responsible leadership emphasises addressing ethical concerns such as bias, transparency and accountability. AI technologies improve resource allocation, crisis management and patient care, but challenges such as workforce resistance and the need for upskilling healthcare professionals remain. Practical implications Healthcare leaders must adopt a responsible leadership framework that balances AI’s potential with ethical and human-centred care principles. Recommendations include developing AI literacy programmes for healthcare professionals, ensuring inclusivity in AI algorithms and establishing governance policies that promote transparency and accountability in AI applications. Originality/value This paper provides a critical, forward-looking perspective on how responsible AI can drive a paradigm shift in healthcare leadership. It offers novel insights into the integration of AI within healthcare organisations, emphasising the need for leadership that prioritises ethical AI usage and promotes patient well-being in a rapidly evolving digital landscape.
- Front Matter
1
- 10.1016/j.jaip.2023.04.034
- Jul 1, 2023
- The Journal of Allergy and Clinical Immunology: In Practice
Can an Artificial Intelligence (AI) Be an Author on a Medical Paper?
- Research Article
1045
- 10.1186/s12911-020-01332-6
- Nov 30, 2020
- BMC Medical Informatics and Decision Making
Background: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and offers an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Methods: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. Results: Each of the domains highlights a different set of core considerations and values relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. Conclusions: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
- Research Article
2
- 10.54254/2753-8818/21/20230845
- Dec 20, 2023
- Theoretical and Natural Science
The development of Artificial Intelligence (AI) in healthcare has had a significant impact on the field. AI can provide more accurate diagnoses and interventions for patients; it can predict, diagnose, and treat diseases, facilitate the maximum use of healthcare resources by integrating medical information, increase efficiency, and reduce overcrowding of healthcare resources. However, the application of AI in healthcare also faces challenges such as accountability, algorithmic security, and data privacy. This paper discusses the application of AI in healthcare, explores the challenges it faces, including accountability traceability, algorithmic safety, data security, and ethical issues, and makes targeted recommendations. This study provides an in-depth exploration of the application of AI in healthcare, helping to improve the accuracy and efficiency of AI applications in healthcare, and providing necessary guidance and references for optimizing and enhancing AI technologies.
- Research Article
22
- 10.1038/s41390-022-02053-4
- Apr 7, 2022
- Pediatric Research
Introduction: There is increasing interest in Artificial Intelligence (AI) and its application to medicine. Perceptions of AI are less well known, notably amongst children and young people (CYP). This workshop investigates attitudes towards AI and its future applications in medicine and healthcare at a specialised paediatric hospital using practical design scenarios. Method: Twenty-one members of a Young Persons Advisory Group for research contributed to an engagement workshop to ascertain potential opportunities, apprehensions, and priorities. Results: When presented with a selection of practical design scenarios, we found that CYP were more open to some applications of AI in healthcare than others. Human-centeredness, governance, and trust emerged as early themes, with empathy and safety considered important when introducing AI to healthcare. Educational workshops with practical examples using AI to help, but not replace, humans were suggested to address issues, build trust, and communicate effectively about AI. Conclusion: Whilst policy guidelines acknowledge the need to include children and young people in developing AI, this requires an enabling environment for human-centred AI involving children and young people with lived experiences of healthcare. Future research should focus on building consensus on enablers for an intelligent healthcare system designed for the next generation, which, fundamentally, allows co-creation. Impact: Children and young people (CYP) want to be included to share their insights about the development of research on the potential role of Artificial Intelligence (AI) in medicine and healthcare, and are more open to some applications of AI than others. Whilst it is acknowledged that a research gap exists on involving and engaging CYP in developing AI policies, there is little in the way of pragmatic and practical guidance for healthcare staff on this topic. This requires research on enabling environments for ongoing digital cooperation to identify and prioritise unmet needs in the application and development of AI.
- Book Chapter
- 10.56461/iup_rlrc.2023.4.ch14
- Oct 1, 2023
Recent developments in the application of artificial intelligence (AI) in health care promise to solve many of the existing global problems in improving human health care and managing global legal challenges. In addition to machine learning techniques, artificial intelligence is currently being applied in health care in other forms, such as robotic systems. However, the artificial intelligence currently used in health care is not fully autonomous, given that health care professionals make the final decision. Therefore, the most prevalent legal issues relating to the application of artificial intelligence are patient safety, impact on patient-physician relationship, physician’s responsibility, the right to privacy, data protection, intellectual property protection, lack of proper regulation, algorithmic transparency and governance of artificial intelligence empowered health care. Hence, the aim of this research is to point out the possible legal consequences and challenges of regulation and control in the application of artificial intelligence in health care. The results of this paper confirm the potential of artificial intelligence to noticeably improve patient care and advance medical research, but the shortcomings of its implementation relate to a complex legal and ethical issue that remains to be resolved. In this regard, it is necessary to achieve a broad social consensus regarding the application of artificial intelligence in health care, and adopt legal frameworks that determine the conditions for its application.
- Research Article
- 10.55041/ijsrem32294
- Apr 29, 2024
- International Journal of Scientific Research in Engineering and Management
The purpose of the paper is to provide an overview of the issues related to artificial intelligence (AI) applications in the Indian healthcare sector and to provide input to policymakers. A qualitative approach has been used to identify government initiatives, opportunities, and challenges for applications of AI and to suggest improvements in policy areas relevant to AI in healthcare. The study provides comprehensive inputs for framing policy on AI in the healthcare industry in India. It also highlights that if the government takes proper actions to overcome the various challenges associated with applications of AI in the Indian healthcare sector, the sector will benefit immensely. The article thus offers inputs concerning policy initiatives, challenges, and recommendations for improving India's healthcare system using different applications of AI.
- Abstract
- 10.1182/blood-2023-190943
- Nov 2, 2023
- Blood
Building Trust: Developing an Ethical Communication Framework for Navigating Artificial Intelligence Discussions and Addressing Potential Patient Concerns
- Research Article
7
- 10.1371/journal.pdig.0000443
- May 10, 2024
- PLOS Digital Health
Artificial intelligence (AI) technologies have emerged as a promising solution to enhance healthcare efficiency and improve patient outcomes. The objective of this study is to analyse the knowledge, attitudes, and perceptions of healthcare professionals in Pakistan about AI in healthcare. We conducted a cross-sectional study using a questionnaire distributed via Google Forms to healthcare professionals (e.g., doctors, nurses, medical students, and allied healthcare workers) working or studying in Pakistan. Consent was taken from all participants before they began the questionnaire. The questions covered participant demographics, basic understanding of AI, AI in education and practice, AI applications in healthcare systems, AI's impact on healthcare professions, and the socio-ethical consequences of the use of AI. We analyzed the data using the Statistical Package for Social Sciences (SPSS), version 26.0. Overall, 616 individuals responded to the survey, of whom n = 610 (99.0%) consented to participate. The mean age of participants was 32.2 ± 12.5 years. Most participants (78.7%, n = 480) had never received any formal sessions or training in AI during their studies or employment. A majority, 70.3% (n = 429), believed that AI would raise more ethical challenges in healthcare, and 66.4% (n = 405) believed that AI should be taught at the undergraduate level. The survey suggests that despite widespread interest, training on AI in healthcare in Pakistan is insufficient. Future work on developing a tailored curriculum for AI in healthcare will help bridge the gap between interest in the use of AI and training.
- Research Article
- 10.25163/primeasia.319802
- Jan 1, 2022
- Journal of Primeasia
Background: The integration of artificial intelligence (AI) in healthcare has significantly transformed clinical practices, offering substantial improvements in diagnosis, treatment planning, and patient outcome predictions. AI technologies, including artificial neural networks, fuzzy expert systems, and hybrid intelligent systems, are advancing the field of augmented medicine by combining AI with traditional healthcare practices. Methods: This study reviews the diverse applications of AI in healthcare, focusing on its impact on clinical procedures, disease detection, and healthcare management. The analysis covers the use of AI-driven tools such as surgical navigation systems, augmented reality for pain management, and machine learning algorithms for early disease detection and clinical documentation. Results: AI technologies like AccuVein and augmented reality headsets have enhanced clinical procedures such as intravenous placements and surgical interventions. Advances in machine learning, particularly neural networks and deep learning, have improved the detection of complex patterns in imaging data, facilitating early diagnosis of diseases like cancer and pneumonia. Natural language processing (NLP) has enhanced the analysis and classification of clinical documentation, while robotic process automation (RPA) has optimized administrative tasks. AI's role in managing infectious diseases, particularly during the COVID-19 pandemic, has been critical, demonstrating its potential in screening, diagnosis, and treatment surveillance. AI applications in oncology and laboratory medicine have also shown increased accuracy and efficiency in disease diagnosis and patient care. Conclusion: AI is revolutionizing healthcare by enhancing diagnostic accuracy, treatment efficacy, and patient care quality. Despite its transformative potential, challenges such as legal accountability and data bias must be addressed for successful integration into healthcare systems. 
Continued research and innovation in AI applications are essential to maximizing its benefits while minimizing associated risks.
- Book Chapter
2
- 10.4018/979-8-3693-3731-8.ch004
- Jun 14, 2024
In recent years, the rapid development of AI technology, particularly Generative AI, has restructured the healthcare industry. Generative AI is a collection of algorithms that uses large volumes of medical data to generate new data in various formats, including medical images, augmented datasets, and candidate medicines. A variety of techniques are employed in Generative AI in the healthcare industry, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), autoregressive models, flow-based models, and probabilistic graphical models. Generative AI can be applied in various domains in the healthcare sector, including drug discovery, medical imaging enhancement, data augmentation, anomaly detection, simulation and training, and predictive modelling. The integration of Generative AI faces challenges such as addressing the ethical and legal issues related to the use of Artificial Intelligence (AI) in healthcare and of synthetic data in clinical decision-making, and ensuring the reliability and interpretability of AI-generated outputs.