Artificial Intelligence and Law: Procedural Safeguards and Regulatory Challenges in Kazakhstan

Abstract

Background: The active integration of artificial intelligence (AI) into diverse spheres of human activity has created significant opportunities for innovation and efficiency, while simultaneously raising complex ethical, legal, and social challenges. Among these, the deployment of high-risk AI systems requires particular scrutiny due to their potential impact on fundamental rights, public safety, and socio-economic relations. This research examines both the benefits and risks of AI technologies, with an emphasis on the need to establish clear legal and regulatory frameworks at the national and international levels.

Methods: The study employs a comparative legal analysis of existing regulatory approaches, including the European Union's AI Act (EU AI Act), the OECD AI Principles, and national legislative practices. The methodology is based on a systematic review of normative legal acts, doctrinal sources, and policy papers, as well as an evaluation of prospective risks associated with the use of high-risk AI systems in various sectors, including transport, healthcare, and financial services.

Results and conclusions: The analysis reveals that, while the adoption of AI contributes to economic development, efficiency in public administration, and improved quality of services, it also generates risks such as discrimination, violations of privacy, cyberthreats, and reduced accountability. In particular, the study highlights that existing legislation in Kazakhstan, as in many other jurisdictions, does not sufficiently address the specificities of high-risk AI systems. Comparative legal analysis demonstrates that the most effective regulatory models are risk-oriented, ensuring transparency, human oversight, and liability mechanisms. The findings suggest that partial amendments to existing legislation, such as in the areas of mandatory insurance and consumer protection, could serve as an interim measure, while the adoption of a dedicated AI law may be necessary in the long term. The study underscores the need for a balanced legal framework that harmonises technological innovation with the protection of human rights and societal interests. It is argued that Kazakhstan, while considering international best practices, should pursue a two-stage approach: (1) introducing targeted amendments to sectoral legislation; and (2) elaborating a comprehensive AI law focused on high-risk systems. Such a framework would mitigate risks, ensure accountability, and foster public trust, while promoting the responsible and sustainable use of artificial intelligence.

Similar Papers
  • Research Article
  • 10.32755/sjlaw.2025.03.058
Legal Aspects of the Use of Artificial Intelligence in Cybersecurity: Regulation and Ethical Dilemmas
  • Jul 8, 2025
  • Scientific Herald of Sivershchyna. Series: Law
  • V Puzyrnyi + 1 more

The article is devoted to the analysis of legal aspects of artificial intelligence (AI) implementation in the field of cybersecurity. In the context of rapid technological development, AI has become an integral tool in identifying and countering cyber threats through anomaly detection systems, automated firewalls, machine learning–based log analysis, threat intelligence, and incident response automation. However, the deployment of such technologies reveals a number of unresolved legal and ethical dilemmas that require urgent regulatory attention. The research identifies three core groups of legal issues: determination of liability, challenges of system autonomy, and the transparency of AI decision-making. The absence of a legal framework for assigning responsibility in cases where AI malfunctions or acts autonomously complicates the process of legal evaluation. The article emphasizes that Ukrainian legislation lacks a definition of “autonomous agent” or “electronic subject,” making it difficult to apply traditional norms of civil and administrative law. The problem of autonomy is particularly relevant, as modern AI systems can modify their behavior independently through machine learning processes. This raises questions about human oversight and the balance between innovation and regulatory control. Additionally, many AI systems function as “black boxes,” making it difficult – even for developers – to understand the logic behind certain decisions. This threatens the protection of human rights, including the right to information and due process. To address these challenges, the article proposes several legal and policy measures, such as the introduction of a dedicated law on AI, adoption of explainable AI principles, implementation of algorithmic audit standards, and development of ethical guidelines for AI developers and users. Keywords: artificial intelligence, cybersecurity, information law, legal responsibility, autonomous systems, artificial intelligence ethics, algorithmic transparency, legal regulation.

  • Research Article
  • 10.24144/2788-6018.2025.06.2.64
Use of artificial intelligence in public administration: challenges and prospects
  • Dec 15, 2025
  • Analytical and Comparative Jurisprudence
  • O I Musii

The rapid advancement of artificial intelligence (AI) technologies has profoundly transformed virtually all areas of human activity, and public administration is no exception. The integration of AI into public management systems opens up unprecedented opportunities to increase efficiency, transparency, and responsiveness in the provision of public services. This article examines the potential, challenges, and prospects of using artificial intelligence in public administration, focusing on ethical, legal, and organizational aspects. It emphasizes that AI tools, from data analytics and machine learning to natural language processing and automated decision-making systems, can significantly contribute to improving the quality of policy planning, resource allocation, and citizen participation. Through predictive modeling, intelligent data processing, and real-time monitoring, AI enables evidence-based decision-making and strengthens the adaptability of public institutions to a dynamic socioeconomic environment. At the same time, the implementation of AI in public administration presents several complex challenges. Key issues include algorithmic bias, data protection violations, a lack of transparency regarding automated decisions, and insufficient digital literacy among public sector employees. The article emphasizes that without a clear ethical framework and appropriate regulatory mechanisms, the use of AI could exacerbate social inequalities and undermine citizens' trust in government institutions. Therefore, the development of clear governance standards for the use of AI in the public sector is essential to ensure accountability, fairness, and human oversight in all decision-making processes. The study also analyzes international experiences in AI governance, particularly in the European Union, the United States, and leading Asian countries, and identifies best practices that can be transferred to national contexts. It argues that the strategic implementation of AI should be based on the principles of open government, inclusivity, and human-centered digital transformation. The article also emphasizes the need for continuous training and development measures for public sector employees to ensure competent use of AI-based tools and the interpretation of algorithmic results. In conclusion, artificial intelligence represents both a challenge and an opportunity for modern public administration. It can optimize administrative processes and strengthen democratic governance, but at the same time requires new legal safeguards, ethical standards, and institutional competencies. The success of integrating AI into public administration depends crucially on the balance between technological innovation and the protection of human rights, transparency, and accountability. The future of AI in the public sector therefore lies not solely in technological progress, but in the development of responsible and sustainable policy strategies that align innovation with the public interest and democratic values.

  • Front Matter
  • 10.1186/s44158-025-00278-3
AI policy in healthcare: a checklist-based methodology for structured implementation.
  • Sep 25, 2025
  • Journal of anesthesia, analgesia and critical care
  • Elena Bignami + 4 more

Artificial Intelligence (AI) is transforming anaesthesia and intensive care medicine, enhancing diagnostic precision, workflow efficiency, and patient safety. However, deploying AI in high-acuity environments involves regulatory, ethical, and operational challenges. The European Union Artificial Intelligence Act (AI Act), effective 2025, imposes binding obligations on healthcare organizations, creating an urgent need for structured, governance-focused AI policies. This work presents a checklist-based methodology for responsible, safe, ethical, and regulation-aligned AI adoption in clinical units. Effective AI policies must ensure transparency, safety, fairness, and regulatory compliance while remaining adaptable to rapid technological and legislative changes. The proposed methodology employs a domain-specific checklist to generate critical evaluative questions, enabling healthcare professionals to systematically assess AI systems' appropriateness, reliability, and legal implications without relying on rigid, quickly outdated prescriptive rules. Regulation (EU) 2024/1689 establishes the first comprehensive AI legal framework, introducing a risk-based classification and imposing stringent requirements for high-risk AI, which often includes medical devices. Compliance obligations extend to both AI-system providers and deployers, making operational compliance instruments and AI literacy programmes essential for lawful implementation. Obligation and planning: From February 2025, the AI Act mandates AI literacy for all personnel interacting with AI systems. Training should cover baseline competencies for all staff, advanced modules for specialists, continuous professional development, and integration of ethical, legal, and governance principles. Competency acquisition and updates must be systematically documented to meet institutional and EU compliance standards. The checklist has two integrated domains: clinical and technical validation, including evidence-based performance assessment, real-world validation, MDR compliance, GDPR adherence, and post-deployment monitoring; and governance and compliance, covering AI Act conformity, organizational accountability, decision traceability, human oversight, AI literacy, and structured audit and update mechanisms. The checklist methodology offers a scalable, adaptable, regulation-ready framework for AI policy development. By embedding legal compliance, clinical safety, governance, and continuous staff training, it supports sustainable AI integration. Future updates will incorporate regulatory changes, real-world feedback, and impact metrics, enhancing AI's contribution to quality, safety, and equity in patient care.

  • Discussion
  • Cited by 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer


  • Research Article
  • 10.1177/0040571x251355920
The fish is only as big as the pond it swims in: theological perspectives on post-scaling AI
  • Jul 1, 2025
  • Theology
  • Enrico Beltramini

This article examines the far-reaching implications of the Chinchilla Paper and related insights for artificial intelligence (AI) scalability, existential risk narratives, and theological reflections on AI. The Chinchilla Paper, a landmark study by DeepMind, disrupts traditional assumptions about computational power as the key to AI advancement, revealing instead that AI’s potential is fundamentally constrained by the finite availability of high-quality, human-generated training data. This insight reframes discussions on AI scalability, casting doubt on existential risk narratives that envision artificial general intelligence (AGI) as an uncontrollable force capable of catastrophic outcomes for humanity. This article underscores AI’s reliance on human-generated inputs and its inherent limitations, tempering apocalyptic fears surrounding its potential. It critiques exaggerated theological narratives that portray AI as either a catastrophic existential threat or a utopian agent of human transformation. Instead, it proposes reframing AI as a practical tool. This approach emphasizes its constraints and dependency on human oversight, promoting a balanced and pragmatic perspective on AI within both theological discussions and broader societal debates.

  • Front Matter
  • Cited by 2
  • 10.1016/j.jaip.2023.04.034
Can an Artificial Intelligence (AI) Be an Author on a Medical Paper?
  • Jul 1, 2023
  • The Journal of Allergy and Clinical Immunology: In Practice
  • Jay M Portnoy + 1 more


  • Research Article
  • 10.1017/err.2025.26
The Use of Facial Recognition Technologies in the Context of Peaceful Protest: The Risk of Mass Surveillance Practices and the Implications for the Protection of Human Rights
  • May 15, 2025
  • European Journal of Risk Regulation
  • Giulia Gabrielli

The increasing use of Artificial Intelligence (AI)-based surveillance technologies such as facial recognition for national and public security purposes in the area of law enforcement raises serious concerns regarding the potential risks of abuse and arbitrariness it might entail in the absence of adequate safeguards. At an international level, the impact of biometric identification systems on the protection and promotion of human rights and fundamental freedoms has been consistently emphasised by international organisations, human rights monitoring mechanisms and civil society, particularly with regard to the risk of mass surveillance possibly resulting in the infringement of the right to privacy and freedom of assembly. This contribution assesses the international human rights law and standards applicable to the use of these technologies for national security purposes, especially in the context of peaceful protest, with reference to the position of the European Court of Human Rights in Glukhin v Russia (11519/20) and recent regulatory attempts.

  • Research Article
  • Cited by 7
  • 10.1016/j.jacr.2021.06.025
Real-World Surveillance of FDA-Cleared Artificial Intelligence Models: Rationale and Logistics.
  • Feb 1, 2022
  • Journal of the American College of Radiology
  • Keith J Dreyer + 2 more


  • Research Article
  • 10.31002/rep.v10i1.2596
Public Policy with AI: The Role of Artificial Intelligence in the Policy Process
  • Jun 23, 2025
  • Jurnal REP (Riset Ekonomi Pembangunan)
  • Hanief Arief + 5 more

This research examines the role of artificial intelligence (AI) in public policy through a literature study approach. The background of the research is based on the rapid development of digitalization and the increasing need for innovation in governance. AI, as a data analysis and prediction tool, has the potential to improve the quality of decision-making and the efficiency of public administration. This study identifies various AI implementations in the health, transportation, and public service sectors in several countries, highlighting benefits such as improved service effectiveness and operational optimization. However, AI implementation also poses ethical challenges such as algorithmic bias, privacy issues, and lack of transparency in the decision-making process. The discussion offers policy recommendations to address these challenges, as well as a comparison of AI adoption in developed and developing countries, providing insights into best practices and anticipated barriers. The research concludes that responsible AI adoption, supported by comprehensive regulation and capacity building of the state apparatus, can significantly transform public policy.

  • Research Article
  • 10.31651/2524-2660-2025-3-127-136
Problems of Data Unreliability in the Use of Artificial Intelligence in Educational Activities
  • Jan 1, 2025
  • Cherkasy University Bulletin: Pedagogical Sciences
  • Serhii Melnyk

Problem (Introduction). The rapid integration of generative artificial intelligence (AI) into education has created unprecedented opportunities for personalised learning, yet it has also raised serious concerns about the reliability of AI-generated content. Large language models (LLMs) optimise for plausibility rather than truth, and they can fabricate facts, citations or even legal cases (When AI Gets It Wrong: Addressing AI Hallucinations and Bias - MIT Sloan Teaching & Learning Technologies, n.d.). Such hallucinations threaten academic integrity: students may unknowingly absorb falsehoods, while teachers could inadvertently reproduce inaccuracies in course materials. Empirical studies reveal that AI systems can hallucinate in from less than 1% up to 15–40% of cases in educational tasks depending on the model and domain (Figure 1), and systematic reviews note that over-reliance on AI dialogue systems is linked to diminished critical thinking, increased technology dependence and the spread of misinformation (Zhai et al., 2024).

Purpose. This article aims to analyse the scope and causes of AI-generated misinformation in education and to develop evidence-based recommendations for mitigating these risks. It combines technical insights on model architecture and training data with pedagogical strategies to foster AI literacy. The goal is to ensure that AI enhances rather than undermines learning.

Methods. A systematic literature review of over 80 sources, including scientific articles, policy documents (AI Act, UNESCO guidelines), and empirical studies, provided a theoretical foundation. Comparative analysis of hallucination rates across models (Makhno et al., 2025; Lelièvre et al., 2025) informed the quantitative assessment. The study also modelled mitigation strategies such as Retrieval-Augmented Generation (RAG) and Chain-of-Verification and evaluated pedagogical interventions like lateral reading and AI literacy programmes.

Results. The findings show that hallucinations stem from both internal (model architecture and context limitations) and external (biased or incomplete training data) factors. Even top models misinform 1–3% of the time, whereas widely used free systems can err 15–40% of the time when generating bibliographies or research proposals (Balch & Blanck, 2024). Hallucinations manifest in various forms: logical errors, mathematical mistakes, fabricated sources and factual inaccuracies. Their educational consequences include decreased critical thinking, increased plagiarism ("AI-giarism") and a risk of spreading disinformation. Regulatory frameworks classify educational AI systems as high-risk; Annex III of the EU AI Act lists educational AI systems for admissions, assessment and monitoring as high-risk and sets obligations for accuracy, transparency and human oversight (Nguyen, 2025). Among mitigation strategies, RAG reduces hallucinations, while Chain-of-Verification and self-consistency improve reliability. Pedagogically, teaching students lateral reading, updating academic policies, and redesigning assessments to require reflection and verification are essential.

Originality. Unlike purely technical surveys or broad commentaries, this study bridges AI research with educational practice and policy, providing a holistic perspective tailored to Ukrainian higher education. It synthesises international findings with local realities, offers a taxonomy of AI errors, presents original visualisations (Table 1 and Figure 1), and proposes a multi-level framework combining technical, pedagogical and regulatory solutions. The article emphasises that AI hallucinations are not simply technical bugs but systemic challenges requiring cultural change.

Conclusion. To harness AI's benefits in education, stakeholders must recognise and mitigate the problem of misinformation. Improving models (via RAG, verification chains), enhancing AI literacy, and adhering to high-risk regulatory standards will help ensure that AI supports, rather than sabotages, learning. Future research should focus on domain-specific hallucination rates, real-time fact-checkers for Ukrainian-language content, and longitudinal studies on AI's cognitive impact. Ultimately, balancing technological innovation with human oversight and ethical principles will determine whether AI becomes a trustworthy educational ally or a source of confusion.

  • Research Article
  • 10.17803/1729-5920.2024.213.8.043-053
Sovereignty as the Foundation for Ensuring Constitutional Human Rights and Freedoms
  • Aug 16, 2024
  • Lex Russica
  • N K Atabekova

The paper examines the principle of sovereignty in the context of ensuring human rights and freedoms. It analyzes the norms and provisions of the Constitution of the Kyrgyz Republic that enshrine human and civil rights and freedoms and the guarantees of their implementation, and examines the mechanism for the implementation and protection of human rights and the role of sovereignty in ensuring both. Using formal legal, structural-functional and comparative legal analysis, the author determines the causes of conflicts and contradictions in the legislative system that complicate constitutional and legal regulation in the field of human rights and freedoms. As a result of the research, the author concludes that sovereignty is of exceptional importance in ensuring the implementation and protection of human and civil rights, and notes their inverse correlation. The author explains the constitutional novelties guaranteeing the protection of human rights and freedoms in the Kyrgyz Republic and considers some effective mechanisms for their protection. At the same time, the author highlights some errors in how certain elements of the legal status of the individual are reflected in the constitutional text, which can give rise to contradictions and conflicts in legal regulation and thereby undermine the mechanism for the exercise of rights, freedoms and human responsibilities. The author justifies the need for the State to rely on constitutional values and to ensure the supreme legal force of the Constitution, given the insufficient effectiveness of international law in ensuring human rights and freedoms, and, at the same time, the expediency of explicitly reflecting in the Basic Law the relationship between international and national legislation.

  • Research Article
  • 10.3390/laws14060098
Integration of Artificial Intelligence into Criminal Procedure Law and Practice in Kazakhstan
  • Dec 12, 2025
  • Laws
  • Gulzhan Nusupzhanovna Mukhamadieva + 3 more

Legal regulation and practical implementation of artificial intelligence (AI) in Kazakhstan’s criminal procedure are considered within the context of judicial digital transformation. Risks arise for fundamental procedural principles, including the presumption of innocence, adversarial process, and protection of individual rights and freedoms. Legislative mechanisms ensuring lawful and rights-based application of AI in criminal proceedings are required to maintain procedural balance. Comparative legal analysis, formal legal research, and a systemic approach reveal gaps in existing legislation: absence of clear definitions, insufficient regulation, and lack of accountability for AI use. Legal recognition of AI and the establishment of procedural safeguards are essential. The novelty of the study lies in the development of concrete approaches to the introduction of artificial intelligence technologies into criminal procedure, taking into account Kazakhstan’s practical experience with the digitalization of criminal case management. Unlike existing research, which examines AI in the legal profession primarily from a theoretical perspective, this work proposes detailed mechanisms for integrating models and algorithms into the processing of criminal cases. The implementation of AI in criminal justice enhances the efficiency, transparency, and accuracy of case handling by automating document preparation, data analysis, and monitoring compliance with procedural deadlines. At the same time, several constraints persist, including dependence on the quality of training datasets, the impossibility of fully replacing human legal judgment, and the need to uphold the principles of the presumption of innocence, the right to privacy, and algorithmic transparency. The findings of the study underscore the potential of AI, provided that procedural safeguards are strictly observed and competent authorities exercise appropriate oversight. Two potential approaches are outlined: selective amendments to the Criminal Procedure Code concerning rights protection, privacy, and judicial powers; or adoption of a separate provision on digital technologies and AI. Implementation of these measures would create a balanced legal framework that enables effective use of AI while preserving core procedural guarantees.

  • Research Article
  • 10.17323/2713-2749.2025.2.69.86
Trust in Artificial Intelligence: Regulatory Challenges and Prospects
  • Jul 2, 2025
  • Legal Issues in the Digital Age
  • Svetlana Vashurina

The last few years have witnessed a rapid penetration of artificial intelligence (AI) into different walks of life, including medicine, the judicial system, public governance and other important activities. Despite the multiple benefits of these technologies, their widespread dissemination raises serious concerns as to whether they are trustworthy. The article analyses the key factors behind public mistrust in AI and discusses ways to build confidence. To understand the reasons for mistrust, the author draws on the historical context, social research findings and judicial practice. Special focus is placed on the security of AI use, the visibility of AI to users and responsibility for decision-making. The author also discusses the current regulatory models in this area, including the development of a universally applicable legal framework, regulatory sandboxes and self-regulation mechanisms for the sector, with multidisciplinary collaboration and adaptation of the existing legal system becoming key factors in this process. Only this approach will produce a balanced development and use of AI systems in the interest of all stakeholders, from vendors to end users. For a more exhaustive coverage of this subject, the following general methods are employed: analysis, synthesis and systematization, along with special legal (comparative legal and historical legal) research methods. In analyzing the available data, the author argues for a comprehensive approach to making AI trustworthy. The following hypothesis is proposed based on the study's findings: trust in AI is a cornerstone of efficient regulation of AI development and use in various areas. The author is convinced that, if AI is made transparent, safe and reliable and provided with human oversight through adequate regulation, the government will maintain purposeful collaboration between humans and technology, thus setting the stage for AI use in critical infrastructures affecting the life, health and basic rights and interests of individuals.

  • Research Article
  • 10.51799/2763-8685v5n1013
The Impact of Artificial Intelligence on Fundamental Rights in Labour Relations: Current Regulation and New Challenges
  • Jun 1, 2025
  • Latin American Journal of European Studies
  • Ana Rosa Rodriguez + 1 more

This article analyzes the impact of artificial intelligence (AI) on fundamental rights within labor relations, in a context marked by increasing automation, digital surveillance, and intensive use of personal data. Based on the hypothesis that, without guarantees of privacy and cybersecurity, fair working conditions and an equitable digital market cannot be sustained, the paper examines the ethical and regulatory challenges posed by the integration of algorithmic technologies and neurotechnologies in the workplace. Special attention is given to the European Union’s Artificial Intelligence Act (AI Act) and the General Data Protection Regulation (GDPR), as key legal frameworks seeking to balance innovation with the protection of human rights. Through a qualitative methodology grounded in legal and bibliographic analysis and case studies, the study highlights the urgent need to establish boundaries on practices such as emotion recognition, mass biometric surveillance, and opaque automated decision-making. The article underscores the importance of strengthening transparency, human oversight, and the development of neuro-rights as emerging dimensions of protection against the new risks posed by AI in the workplace.

  • Research Article
  • 10.9734/jamps/2025/v27i6788
Artificial Intelligence in Medical Research: Ethical and Regulatory Challenges in Developing Economies
  • Jun 11, 2025
  • Journal of Advances in Medical and Pharmaceutical Sciences
  • Charles Ntungwen Fokunang + 28 more

Introduction: Clinical research is a key area in which the use of AI on healthcare data has seen a significant increase, even though it has met with considerable ethical, legal and regulatory challenges. Artificial Intelligence (AI) refers to the ability of algorithms encoded in technology to learn from data and to perform automated tasks without every step in the process being explicitly programmed by a human. AI development relies on big data collected from clinical trials to train algorithms, which requires careful consideration of consent, data origin and ethical standards. When data is acquired from third-party sources, transparency about collection methods, geographic origin and anonymization standards becomes critical. While consent forms used in clinical trials can offer clearer terms for data use, ambiguity remains about how this data can be reused for AI purposes after the trial ends. There are very few or no laws on the use of AI, especially in developing countries, and there are many misconceptions about the global use of AI.

Statement of Objectives: Artificial intelligence, as an innovative technology, has contributed to a paradigm shift in conducting clinical research. Unfortunately, AI faces ethical and regulatory challenges, especially in limited-resource countries where the technology is still being consolidated. One of the main concerns is data re-identification, in which anonymized data can potentially be traced back to individuals, especially when linked with other datasets. Data ownership is also a complex and often controversial area within the healthcare sector. AI developers need to clearly explain the value of data collection to hospitals and cybersecurity teams to ensure that they understand how the data will be secured and used ethically.

Methodology: The World Health Organization (WHO) recognizes that AI holds great promise for clinical health research and for the practice of medicine and the biomedical and pharmaceutical sciences. WHO also recognizes that, to fully maximize the contribution of AI, the ethical, legal and regulatory challenges must be addressed for healthcare systems, practitioners and beneficiaries of medical and public health services. In this study we drew on accessible websites and peer-reviewed open-access publications that deal with the ethical and regulatory concerns of AI, which we discuss in this write-up. We have focused on the development of AI and its applications, with particular emphasis on ethical and regulatory concerns. We have discussed whether AI can advance the interests of patients and communities within the framework of a collective effort to design and implement ethically defensible laws and policies and ethically designed AI technologies. Finally, we have examined the potentially serious negative consequences if ethical principles and human rights obligations are not prioritized by those who fund, design, regulate or use AI technologies for health research.

Results: From our data mining and review of multiple documents, vital information was pooled together through a systematic online search, showing that AI is contributing significantly to the growth of global clinical research and the advancement of medicine. However, we observed many ethical and regulatory challenges that have impacted health research in developing economies. Ethical challenges include AI and human rights, patient privacy, safety and liability, informed consent and data ownership, and bias and fairness. Among the legal and regulatory challenges, we observed issues with data security compliance, data monitoring and maintenance, transparency and accountability, and data collection, storage and use. The role of third-party vendors in AI healthcare solutions and the development and integration of AI into health systems have also been reviewed.

Conclusion: The advancement of AI, coupled with innovative digital health technology, has made a significant contribution to addressing some challenges in clinical research within the domains of medicine and biomedical and pharmaceutical product development. Despite the ethical and regulatory challenges, AI has driven significant innovation and technology in clinical research, especially in drug discovery and development and clinical trial studies.
