Ethics and privacy in AI regulation: Navigating challenges and strategies for compliance

Abstract

A new summer of artificial intelligence (AI) began a year ago, promising tantalising technical developments and efficiencies of scale, while in parallel the Internet is flooded with advice, notes and analyses of AI’s impact and risks. Although the potential uses of AI are promising and could help solve very real human challenges, the risks and societal impact are real too. With AI infiltrating all areas of life, such as online platforms, work, healthcare, social services and the justice system, it is essential that it is developed within key safety parameters. Furthermore, it is no secret that for AI to be effective it needs to process vast quantities of data, which is at odds with the General Data Protection Regulation (GDPR) principle of data minimisation. Businesses are repeatedly told to mitigate such risks to fundamental rights, privacy, non-discrimination and freedom from bias with stringent privacy and AI governance, all within an ethical framework and in compliance with existing legislation. Amid this bombardment of information, this paper seeks to provide practical guidelines for complying with existing privacy regulation while implementing safe and trustworthy AI. The first part considers compliance with the GDPR when developing or using AI, while the second part provides practical recommendations on implementing an ethical AI framework.
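
To make the tension between data-hungry AI and data minimisation concrete, the following minimal sketch (not taken from the paper) shows one way minimisation and pseudonymisation might be applied to records before they are used to train a model; the field names, allowed-feature list and salt handling are hypothetical assumptions.

```python
# Illustrative sketch only: GDPR-style data minimisation and pseudonymisation
# applied to a record before model training. Field names, the feature list
# and the salt are hypothetical assumptions, not the paper's method.
import hashlib

ALLOWED_FEATURES = {"age_band", "region", "visit_count"}  # keep only what the model needs

def minimise(record: dict, salt: str) -> dict:
    """Drop direct identifiers, pseudonymise the user ID, keep only allowed features."""
    pseudo_id = hashlib.sha256((salt + record["user_id"]).encode()).hexdigest()
    features = {k: v for k, v in record.items() if k in ALLOWED_FEATURES}
    return {"pseudo_id": pseudo_id, **features}

raw = {"user_id": "u-123", "name": "Jane Doe", "email": "jane@example.com",
       "age_band": "30-39", "region": "EU-West", "visit_count": 7}
print(minimise(raw, salt="keep-this-secret-and-rotate-it"))
# -> {'pseudo_id': '…', 'age_band': '30-39', 'region': 'EU-West', 'visit_count': 7}
```

Note that pseudonymised records such as these generally remain personal data under the GDPR, so downstream processing still needs a lawful basis and the other safeguards the paper discusses.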

Similar Papers
  • Research Article
  • Citations: 1
  • 10.37497/sdgs.v6istudies.35
The role of the European Union in shaping an ethical and legal framework for artificial intelligence (AI) in education
  • Jun 9, 2025
  • SDGs Studies Review
  • Andreea Dragomir

Purpose: This article analyzes the role of the European Union (EU) in shaping an ethical and legal framework for the use of artificial intelligence (AI) in education. It investigates how European institutions aim to ensure trustworthy, transparent, and human-centered AI, while also addressing the challenges of implementation across Member States. Methodology: The study adopts a normative and documentary research design, drawing on EU policy strategies, legislative initiatives, and ethical guidelines. It further includes a case study of Romania, examining the extent to which European orientations are reflected in national education systems, with a focus on institutional readiness, digital capacity, and teacher training. Findings: The analysis reveals a gap between high-level EU strategies—such as the proposed AI Act, the Digital Education Action Plan, and the Ethics Guidelines for Trustworthy AI—and the practical preparedness of Member States. Romania exemplifies these challenges, showing deficiencies in digital infrastructure, lack of teacher training, and absence of clear ethical standards. These discrepancies highlight the risks of fragmented governance and inconsistent adoption of AI in education. Originality/Contribution: By combining normative analysis with a country-level case study, the article contributes to the academic debate on AI governance in education. It demonstrates the tension between innovation and fundamental rights and provides insights into the institutional and ethical conditions necessary for effective implementation. Practical Implications: The study offers policy recommendations to strengthen teacher training, ensure algorithmic transparency, and establish certification and oversight mechanisms. It underscores the need for coordinated governance to safeguard equity, trust, and democratic values in the integration of AI in education.

  • Research Article
  • Citations: 7
  • 10.54660/.ijfmr.2021.2.1.43-55
Digital Transformation and Data Governance: Strategies for Regulatory Compliance and Secure AI-Driven Business Operations
  • Jan 1, 2021
  • Journal of Frontiers in Multidisciplinary Research
  • James Paul Onoja + 5 more

Digital transformation has redefined business operations, driving efficiency, innovation, and competitiveness through artificial intelligence (AI) and advanced analytics. However, the rapid adoption of AI-driven processes introduces significant regulatory and security challenges, necessitating robust data governance frameworks to ensure compliance, mitigate risks, and protect sensitive information. This study explores the intersection of digital transformation and data governance, highlighting strategies for regulatory compliance and secure AI-driven business operations. The paper first examines the evolving landscape of AI regulation, emphasizing global frameworks such as the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and emerging AI governance policies. It underscores the critical role of compliance in mitigating data privacy concerns, ensuring transparency, and fostering ethical AI implementation. Next, the study explores data governance strategies essential for AI-driven enterprises. These strategies include data classification, access control mechanisms, encryption protocols, and real-time auditing to enhance data integrity and security. The importance of explainable AI (XAI) is also discussed, demonstrating how organizations can achieve regulatory alignment while maintaining AI model interpretability. Furthermore, the research highlights best practices for integrating digital transformation initiatives with data governance frameworks. It presents case studies on AI-driven businesses that have successfully implemented compliance-driven operational models, showcasing how enterprises can balance innovation with regulatory adherence. Key elements such as risk-based approaches, third-party data audits, and compliance automation tools are analyzed. Finally, the paper provides insights into future trends in AI governance, predicting the increasing convergence of digital transformation, AI ethics, and regulatory policies. As AI adoption accelerates, enterprises must adopt proactive data governance frameworks to address security vulnerabilities, regulatory obligations, and ethical considerations. This study serves as a comprehensive guide for organizations navigating the complexities of digital transformation while ensuring data security, regulatory compliance, and responsible AI implementation. By integrating strategic data governance practices, businesses can unlock AI's full potential while safeguarding consumer trust and regulatory alignment.
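
As an illustration of the governance controls listed above, such as data classification, access control and real-time auditing, the following minimal Python sketch (not drawn from the cited study) pairs a classification-aware access check with a simple audit trail; the classification labels, roles and logging setup are hypothetical assumptions.

```python
# Illustrative sketch only: a data-classification-aware access check with an
# audit trail. Labels, roles and the logging destination are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data-access-audit")

# Which roles may read data at each classification level.
ACCESS_POLICY = {
    "public": {"analyst", "engineer", "auditor"},
    "internal": {"engineer", "auditor"},
    "restricted": {"auditor"},
}

def read_record(user: str, role: str, record: dict) -> dict | None:
    """Return the record if the role may read its classification; audit every attempt."""
    allowed = role in ACCESS_POLICY.get(record["classification"], set())
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "classification": record["classification"],
        "granted": allowed,
    }))
    return record if allowed else None

record = {"classification": "restricted", "payload": "salary figures"}
print(read_record("alice", "engineer", record))  # denied -> None, but still audited
```

In a production setting the policy table would typically live in a central governance service and the audit events would feed a tamper-evident store, but the shape of the check stays the same.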

  • Discussion
  • Citations: 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer

  • Research Article
  • Citations: 4
  • 10.12688/openreseurope.15145.1
Voicing challenges: GDPR and AI research
  • Nov 23, 2022
  • Open Research Europe
  • Katherine Quezada-Tavarez + 2 more

EU data protection rules could be difficult for researchers to navigate, particularly when processing massive datasets containing personal data for Artificial Intelligence (AI) developments. This article examines how data protection intersects with AI research to elucidate the issues arising from the use of large-scale databases containing personal data to train, test and validate AI systems. The key objectives of this work are to (1) scrutinise the data protection requirements and limits for the processing of personal data in AI research, (2) reflect on possible complications regarding data quality requirements for trustworthy AI and General Data Protection Regulation (GDPR) compliance, and (3) present possible ways forward to reconcile GDPR requirements and AI research. While reviewing and mapping relevant provisions and guidance, we identify data protection challenges posed by the use of massive databases containing personal data for AI research. The findings suggest that, while the legal regime for research under the GDPR resolves some of the challenges identified, others, such as legal basis for processing and processing of special categories of data, remain unaddressed. We argue that the nature of these complications will make it difficult for EU researchers to advance in trustworthy AI efforts. The analysis concludes by suggesting possible ways to tackle the remaining issues.

  • Research Article
  • Citations: 387
  • 10.1016/j.inffus.2023.101896
Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation
  • Jun 23, 2023
  • Information Fusion
  • Natalia Díaz-Rodríguez + 5 more

  • Research Article
  • 10.65521/itsi-teee.v12i1.143
Ethical Considerations in AI Governance: Towards Responsible AI Development
  • Apr 15, 2025
  • ITSI Transactions on Electrical and Electronics Engineering
  • Kevin Sinclair + 1 more

The proliferation of artificial intelligence (AI) technologies has brought about transformative advancements across various domains, revolutionizing industries, and reshaping societal interactions. However, alongside the promises of AI-driven innovation, concerns regarding ethical implications, fairness, transparency, and accountability have gained prominence, necessitating a concerted effort towards responsible AI development and governance. This paper examines the ethical considerations inherent in AI governance, aiming to elucidate the principles, frameworks, and best practices for fostering ethical AI deployment and mitigating potential risks. We delve into key ethical challenges, including bias and discrimination, privacy infringement, algorithmic transparency, and societal impact, exploring the multifaceted dimensions of ethical AI design, deployment, and regulation. Moreover, we discuss emerging regulatory initiatives, industry standards, and interdisciplinary collaborations aimed at promoting ethical AI governance and ensuring alignment with societal values and human rights. Through this comprehensive review, we aim to contribute to the ongoing discourse on responsible AI development and empower stakeholders to navigate the ethical complexities of AI-driven technologies in an increasingly interconnected and AI-enabled world.

  • Research Article
  • Citations: 24
  • 10.1080/13669877.2024.2350720
Possible harms of artificial intelligence and the EU AI Act: fundamental rights and risk
  • May 2, 2024
  • Journal of Risk Research
  • Isabel Kusche

Various actors employ the notion of risk when they discuss the future role of Artificial Intelligence (AI) in society – sometimes as a general pointer to possible unwanted consequences of the underlying technologies, sometimes oriented towards a political regulation of AI risks. Mostly discussed within a legal or ethical framework, we still lack a perspective on AI risks based on sociological risk research. Building on systems-theoretical thinking about risk and society, this article analyses the potential and limits of a risk-based regulation of AI, in particular with regard to the notion of harm to fundamental rights. Drawing on the AI Act, its earlier drafts and related documents, the paper analyses how this regulatory framework delineates harms of AI and which implications the chosen delineation has for the regulation. The results show that fundamental rights are invoked as legal rules, as values and as a foundation for trustworthiness of AI in parallel to being identified as at risk from AI. The attempt to frame all possible harms in terms of fundamental rights creates communicative paradoxes. It opens the door to a political classification of high-risk AI systems as well as a future standard-setting that is removed from systematic concerns about fundamental rights and values. The additional notion of systemic risk, addressing possible risks from general-purpose AI models, further reveals the problems with delineating harms of AI. In sum, the AI Act is unlikely to achieve what it aims to do, namely the creation of conditions for trustworthy AI.

  • Single Book
  • 10.62311/nesx/97891
Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems
  • Mar 14, 2025
  • Murali Krishna Pasupuleti

Abstract: As Artificial Intelligence (AI) advances, so do the risks associated with deepfakes, misinformation, and algorithmic bias, posing significant threats to security, privacy, democracy, and societal trust. "Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems" provides a comprehensive analysis of AI security vulnerabilities, adversarial machine learning, AI-driven misinformation, and bias in automated decision-making. The book explores how AI-generated synthetic media, data poisoning attacks, and biased algorithms are being weaponized for cyber fraud, political manipulation, and unethical automation. It delves into defensive strategies, AI forensic tools, cryptographic AI verification, and fairness-aware machine learning techniques to combat these emerging threats. Additionally, the book examines global AI regulations, governance frameworks, and ethical deployment standards that ensure transparency, accountability, and security in AI-driven ecosystems. Through real-world case studies, technical insights, and policy recommendations, this book serves as an essential resource for AI researchers, cybersecurity professionals, policymakers, and technology leaders aiming to develop trustworthy AI systems that resist adversarial manipulation, misinformation campaigns, and algorithmic bias while fostering fair, transparent, and secure AI adoption. Keywords: AI security, adversarial machine learning, deepfake detection, AI-generated misinformation, synthetic media, bias mitigation, AI ethics, AI governance, trustworthy AI, explainable AI (XAI), fairness-aware machine learning, cryptographic AI, federated learning security, digital forensics, algorithmic bias, data poisoning attacks, model robustness, cybersecurity in AI, misinformation detection, deep learning security, AI regulatory policies, zero-trust AI, blockchain-based content verification, ethical AI deployment, secure AI frameworks, AI transparency, AI-driven cyber threats, fake news detection, AI fraud prevention.

  • Research Article
  • 10.2478/picbe-2025-0191
Global Perspectives on Digital and AI Legislation: A Comparative Study of Data Protection, AI Governance, and Healthcare Innovations with a Focus on Romania
  • Jul 1, 2025
  • Proceedings of the International Conference on Business Excellence
  • Cristian Constantin Francu + 1 more

Digital and artificial intelligence (AI) technologies are reshaping governance, requiring adaptive regulatory frameworks to ensure data privacy, digital identity security, and AI ethics. This study examines global approaches to AI and data governance, focusing on the European Union’s (EU) General Data Protection Regulation (GDPR) and AI Act, compared to regulatory structures in the United States (US). Romania serves as a case study to assess national implementation challenges and sector-specific impacts, particularly in healthcare. Using a mixed-methods approach combining legislative analysis, comparative study, and sectoral case examination, the research highlights key takeaways: Romania’s progress in AI-driven healthcare solutions, the necessity of tailored digital infrastructure investments, and the role of government ordinances in ensuring compliance. Policy recommendations emphasize public-private collaboration, regulatory adaptation, and targeted sectoral strategies to enhance Romania’s AI governance while aligning with EU standards.

  • Research Article
  • 10.1163/22112987-bja00004
AI Governance in Saudi Arabia: Cultural Values and Ethical AI Regulations in Comparative Perspective
  • Apr 10, 2025
  • Yearbook of Islamic and Middle Eastern Law Online
  • Beata Polok + 1 more

This country survey examines Saudi Arabia’s approach to artificial intelligence (AI) governance, focusing on the regulatory and ethical frameworks that shape its AI ecosystem. The study situates Saudi Arabia’s AI policies within the broader context of Vision 2030, emphasising the role of the Saudi Data and Artificial Intelligence Authority (SDAIA) in developing guidelines for AI ethics and generative AI applications. The Kingdom’s AI strategy is characterised by a balance between cultural values, international AI ethics standards, and economic development goals. Unlike rigid regulatory models, Saudi Arabia’s AI governance adopts a flexible, principle-based approach, incorporating voluntary compliance incentives such as motivational badges. The survey also contrasts Saudi Arabia’s AI governance with other major regulatory models, including those of the European Union, the United States, and China. The findings highlight the Kingdom’s goal to position itself as a global AI hub while ensuring alignment with national priorities and ethical considerations.

  • Research Article
  • Citations: 2
  • 10.1108/tg-08-2025-0240
Generative AI and the urban AI policy challenges ahead: Trustworthy for whom?
  • Dec 4, 2025
  • Transforming Government: People, Process and Policy
  • Igor Calzada

Purpose: This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of Generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue – spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences.

Design/methodology/approach: This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions.

Findings: This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems, particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement.

Research limitations/implications: As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures.

Practical implications: This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts.

Social implications: The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences.

Originality/value: This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis, bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.

  • Research Article
  • Citations: 389
  • 10.1098/rsta.2018.0080
Governing artificial intelligence: ethical, legal and technical opportunities and challenges.
  • Oct 15, 2018
  • Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • Corinne Cath

This paper is the introduction to the special issue entitled: 'Governing artificial intelligence: ethical, legal and technical opportunities and challenges'. Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic, social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. It also gives a brief overview of recent developments in AI governance, how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is set, as well as providing some concrete suggestions to further the debate on AI governance. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.

  • Research Article
  • Citations: 5
  • 10.3205/zma001702
Legal aspects of generative artificial intelligence and large language models in examinations and theses.
  • Jan 1, 2024
  • GMS journal for medical education
  • Maren März + 3 more

The high performance of generative artificial intelligence (AI) and large language models (LLM) in examination contexts has triggered an intense debate about their applications, effects and risks. What legal aspects need to be considered when using LLM in teaching and assessment? What possibilities do language models offer? Statutes and laws are used to assess the use of LLM:
  • University statutes, state higher education laws, licensing regulations for doctors
  • Copyright Act (UrhG)
  • General Data Protection Regulation (GDPR)
  • AI Regulation (EU AI Act)
LLM and AI offer opportunities but require clear university frameworks. These should define legitimate uses and areas where use is prohibited. Cheating and plagiarism violate good scientific practice and copyright law. Cheating is difficult to detect, and plagiarism by AI is possible; users of the products remain responsible. LLM are effective tools for generating exam questions. Nevertheless, careful review is necessary, as even apparently high-quality products may contain errors. However, the risk of copyright infringement with AI-generated exam questions is low, as copyright law allows up to 15% of protected works to be used for teaching and exams. The grading of exam content is subject to higher education laws and regulations and the GDPR; exclusively computer-based assessment without human review is not permitted. For high-risk applications in education, the EU's AI Regulation will apply in the future. When dealing with LLM in assessments, evaluation criteria for existing assessments can be adapted, as can assessment programmes, e.g. to reduce the motivation to cheat. LLM can also become the subject of the examination themselves. Teachers should undergo further training in AI and consider LLM as an addition.

  • Book Chapter
  • 10.4018/979-8-3373-3384-7.ch005
Regulation and Ethical Issues of Artificial Intelligence in Ghana
  • Mar 26, 2025
  • Elijah Tukwariba Yin

This chapter investigates the legal and ethical frameworks of artificial intelligence (AI) in Ghana, emphasizing how the country can draw upon the EU AI Act 2024 to influence a harmonious AI legal regime. It is argued that the effective regulation of AI in Ghana requires a harmonious legal framework that addresses ethical concerns and human rights, transparency, and accountability. Using a desktop review, it is evident that Ghana's expanding AI landscape requires a well-aligned regulatory approach that prioritizes human well-being, safety, and dignity. Such a framework ensures that AI technologies respect citizens' rights and enhance public safety. By categorizing AI systems based on risk levels, Ghana can mitigate potential harmful practices while fostering innovation and trust. Incorporating ethical frameworks into AI governance is vital for effective regulation, as it offers clear guidelines that align with human values. This alignment promotes global cooperation and safeguards human interests in society.

  • Research Article
  • 10.59613/global.v2i7.234
The Legal Implications of Data Protection Laws, AI Regulation, and Cybersecurity Measures on Privacy Rights in 2024
  • Jul 25, 2024
  • Global International Journal of Innovative Research
  • Dharma Setiawan Negara + 4 more

This study explores the legal implications of data protection laws, artificial intelligence (AI) regulation, and cybersecurity measures on privacy rights in 2024. The primary objective is to qualitatively analyze how recent advancements and legislative changes in these areas impact individual privacy rights and shape the legal landscape for data protection. The research employs a qualitative literature review methodology, synthesizing findings from academic articles, legal texts, policy papers, and case studies to provide a comprehensive understanding of the evolving legal challenges and implications for privacy rights. The literature review methodology involves systematically collecting and analyzing a wide range of scholarly sources on data protection, AI regulation, and cybersecurity. The study categorizes the literature into key themes, such as the effectiveness of current data protection laws, the ethical and legal considerations of AI, and the impact of cybersecurity measures on personal data security. Through a thematic analysis, the research identifies the intersection of these legal areas and their collective influence on privacy rights. The findings reveal that recent data protection laws, such as the General Data Protection Regulation (GDPR) and emerging national legislations, have significantly enhanced individual control over personal data and accountability for data breaches. However, the rapid advancement of AI technologies poses new challenges for privacy, including concerns about data bias, algorithmic transparency, and the ethical use of personal information. Cybersecurity measures are essential for protecting data integrity and preventing unauthorized access, yet they also raise issues related to surveillance and the potential infringement of privacy rights.
