Different fields, different appropriateness? Unpacking emerging normativity in China’s AI governance
How does a state initially form its stance on artificial intelligence (AI) governance before projecting it onto the international stage? Drawing on field theory, this article argues that such formation is both a pluralistic process – where diverse normativities, or understandings of appropriateness, emerge across intersecting yet distinct fields – and a dynamic one, shaped by competition among various actors seeking to influence the state’s overall approach. Using China as a least-likely case study, this article finds that different normativities of AI governance emerge across three fields: security, technology and diplomacy. These variations are driven by two factors: (1) the specific issues at stake within each field and (2) the dominant actors whose practices produce these normativities. It introduces two mechanisms to explain how these diverse normativities combine to shape China’s overarching, state-level normativity regarding AI governance. This analysis extends a Bourdieu-informed perspective on norm research, showing the complexity and fluidity of emerging normativities as well as the hierarchies and power relations that shape them. It also offers empirical insights for working with China on prospective global AI governance frameworks.
- Research Article
- 10.1007/s43681-022-00143-x
- Feb 24, 2022
- AI and Ethics
Artificial intelligence (AI) governance is required to reap the benefits and manage the risks brought by AI systems. This means that ethical principles, such as fairness, need to be translated into practicable AI governance processes. A concise AI governance definition would allow researchers and practitioners to identify the constituent parts of the complex problem of translating AI ethics into practice. However, there have been few efforts to define AI governance thus far. To bridge this gap, this paper defines AI governance at the organizational level. Moreover, we delineate how AI governance enters into a governance landscape with numerous governance areas, such as corporate governance, information technology (IT) governance, and data governance. Therefore, we position AI governance as part of an organization’s governance structure in relation to these existing governance areas. Our definition and positioning of organizational AI governance paves the way for crafting AI governance frameworks and offers a stepping stone on the pathway toward governed AI.
- Research Article
- 10.9790/0661-2605031925
- Oct 1, 2024
- IOSR Journal of Computer Engineering
Africa as a continent recognizes the need for Artificial Intelligence (AI) governance frameworks. However, despite initiatives like the African Union's Continental AI Strategy, African nations grappled with challenges in formulating comprehensive AI policies, while established global frameworks provided advanced benchmarks for comparison. To address this disparity, this study conducted a comparative analysis of AI governance in Africa relative to global standards and practices. Employing a comprehensive document analysis methodology, the research examined key policy documents, strategic frameworks, and regulatory guidelines across African, European, American, and Asian contexts. The findings revealed that while the African Union demonstrated commitment to coordinated AI governance, African approaches generally lagged behind global benchmarks in comprehensiveness, formalization, and ethical integration. The study identified a notable fragmentation in AI governance across African nations, contrasting with more unified approaches in other regions. African frameworks emphasized leveraging AI for socio-economic development, diverging from the risk mitigation focus seen in EU regulations. The integration of indigenous African ethical perspectives in AI governance frameworks was limited, presenting both challenges and opportunities. Significant disparities in digital infrastructure and AI capacity between Africa and other regions were found to affect governance implementation. The study concluded that despite these challenges, there was potential for Africa to develop innovative, context-specific AI governance models that could contribute valuable insights to the global discourse on responsible AI development. 
Recommendations included accelerating the implementation of the Continental AI Strategy, prioritizing investment in digital infrastructure, developing Africa-centric AI ethics frameworks, establishing mechanisms for regular benchmarking against global standards, fostering increased collaboration, and implementing AI literacy programs across the continent.
- Research Article
- 10.1098/rsos.231994
- Aug 1, 2024
- Royal Society open science
Global artificial intelligence (AI) governance must prioritize equity, embrace a decolonial mindset, and provide the Global South countries the authority to spearhead solution creation. Decolonization is crucial for dismantling Western-centric cognitive frameworks and mitigating biases. Integrating a decolonial approach to AI governance involves recognizing persistent colonial repercussions, leading to biases in AI solutions and disparities in AI access based on gender, race, geography, income and societal factors. This paradigm shift necessitates deliberate efforts to deconstruct imperial structures governing knowledge production, perpetuating global unequal resource access and biases. This research evaluates Sub-Saharan African progress in AI governance decolonization, focusing on indicators like AI governance institutions, national strategies, sovereignty prioritization, data protection regulations, and adherence to local data usage requirements. Results show limited progress, with only Rwanda notably responsive to decolonization among the ten countries evaluated; 80% are 'decolonization-aware', and one is 'decolonization-blind'. The paper provides a detailed analysis of each nation, offering recommendations for fostering decolonization, including stakeholder involvement, addressing inequalities, promoting ethical AI, supporting local innovation, building regional partnerships, capacity building, public awareness, and inclusive governance. This paper contributes to elucidating the challenges and opportunities associated with decolonization in SSA countries, thereby enriching the ongoing discourse on global AI governance.
- Research Article
- 10.56345/ijrdv12n3s111
- Dec 25, 2025
- Interdisciplinary Journal of Research and Development
Rapid digital transformation has resulted in a paradigm shift, creating the need to shape effective artificial intelligence (AI) governance. AI governance encompasses laws, policies, frameworks, and practices at global, regional, national, and organizational levels. Evaluating the impact of AI requires addressing ethical considerations. The need for appropriate AI governance has been highlighted by the United Nations (UN), which has access to existing normative and policy instruments, such as international standards. Global efforts in the UN system regarding AI governance are grounded in international law. AI governance takes a pragmatic approach and is delivered through an ecosystem that includes research, development, coordination, monitoring, evaluation, capacity building, and stakeholder engagement. Effective AI governance is essential in shaping the future of AI. At the European Union (EU) level, significant progress has been made, particularly with the AI Act, the first-ever regulation specifically on AI. The AI Act follows a risk-based approach and categorizes AI systems into four risk levels. It applies not only to EU-based providers but also to those outside the EU, particularly when their AI systems' outputs are used within the EU. As Europe's geopolitical influence remains crucial, particularly in light of recent developments, AI governance must be addressed from a multi-stakeholder perspective. The aim of this paper is to identify the current challenges in shaping AI governance in Europe through this approach. The study employs a qualitative, case-study methodology, analyzing the roles and needs of key stakeholders, including governments, regulatory bodies, international institutions, AI engineers, ethicists, industry associations, and end-users.
- Single Book
- 10.62311/nesx/97891
- Mar 14, 2025
Abstract: As Artificial Intelligence (AI) advances, so do the risks associated with deepfakes, misinformation, and algorithmic bias, posing significant threats to security, privacy, democracy, and societal trust. "Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems" provides a comprehensive analysis of AI security vulnerabilities, adversarial machine learning, AI-driven misinformation, and bias in automated decision-making. The book explores how AI-generated synthetic media, data poisoning attacks, and biased algorithms are being weaponized for cyber fraud, political manipulation, and unethical automation. It delves into defensive strategies, AI forensic tools, cryptographic AI verification, and fairness-aware machine learning techniques to combat these emerging threats. Additionally, the book examines global AI regulations, governance frameworks, and ethical deployment standards that ensure transparency, accountability, and security in AI-driven ecosystems. Through real-world case studies, technical insights, and policy recommendations, this book serves as an essential resource for AI researchers, cybersecurity professionals, policymakers, and technology leaders aiming to develop trustworthy AI systems that resist adversarial manipulation, misinformation campaigns, and algorithmic bias while fostering fair, transparent, and secure AI adoption. 
Keywords: AI security, adversarial machine learning, deepfake detection, AI-generated misinformation, synthetic media, bias mitigation, AI ethics, AI governance, trustworthy AI, explainable AI (XAI), fairness-aware machine learning, cryptographic AI, federated learning security, digital forensics, algorithmic bias, data poisoning attacks, model robustness, cybersecurity in AI, misinformation detection, deep learning security, AI regulatory policies, zero-trust AI, blockchain-based content verification, ethical AI deployment, secure AI frameworks, AI transparency, AI-driven cyber threats, fake news detection, AI fraud prevention.
- Research Article
- 10.1111/1475-6765.12570
- Nov 27, 2022
- European Journal of Political Research
How much do citizens support artificial intelligence (AI) in government and politics at different levels of decision‐making authority and to what extent is this AI support associated with citizens’ conceptions of democracy? Using original survey data from Germany, the analysis shows that people are overall sceptical toward using AI in the political realm. The findings suggest that how much citizens endorse democracy as liberal democracy as opposed to several of its disfigurations matters for AI support, but only in high‐level politics. While a stronger commitment to liberal democracy is linked to lower support for AI, the findings contradict the idea that a technocratic notion of democracy lies behind greater acceptance of political AI uses. Acceptance is higher only among those holding reductionist conceptions of democracy which embody the idea that whatever works to accommodate people's views and preferences is fine. Populists, in turn, appear to be against AI in political decision making.
- Conference Article
- 10.1145/3396956.3396971
- Jun 15, 2020
In recent years, the topic of artificial intelligence in government has become a major area of study. Governments have been eager to adopt artificial intelligence for a number of purposes, including for the prediction of risk in social services. Child protection services are exploring predictive analytics for the initial screening of cases. While research identifies data quality issues as a major barrier, little is known about the characteristics of these issues in child protection, their relationship to organizational memory contained in administrative data, and their impact on the ability of an organization to adopt these technologies. This study gained insight into the socio-technical limitations of duplicate records when trying to bring organizational memory to bear in predictive decision support by interviewing and observing staff use of information technology systems. The study's findings suggest that record duplication in case management systems in child protection could pose a significant challenge to the introduction of artificial intelligence technologies such as predictive analytics for decision assistance. There is a need to address foundational information management and system issues before artificial intelligence approaches such as this can be introduced in the child protection sector.
- Research Article
- 10.1007/s43681-022-00205-0
- Aug 15, 2022
- AI and Ethics
Artificial intelligence (AI) is anticipated to have a transformative impact on humanity, which has prompted researchers to analyze its implementation and use to ensure that the technology advances ethically and is beneficial for society. Though countries have begun to develop governance initiatives to regulate AI, the number of emerging AI regimes with an established structure is still relatively low. Meanwhile, the technology is advancing rapidly and has already caused inequitable harm to underrepresented communities. Thus, there is an urgent need to establish robust governance to mitigate the issues and risks attendant on deploying AI. While numerous ethics principles and structures have been recommended, this article intends to address the policy lag by providing policymakers with a simple and compelling AI governance framework that situates AI principles as the guiding baseline for developing and evaluating policies. Rather than devising new policy recommendations, the most recent (at the time of writing) and comprehensive governance documents from China, the European Union, and the United States were systematically selected and examined in a comparative analysis to study how the three regimes address AI principles. Based on the comparative analysis, the most comprehensive and effective recommendations were selected to produce seven broad policy recommendations. The governance framework and recommendations are intentionally broad so that they can be adapted to address AI principles adequately across diverse contexts, encouraging the implementation of AI principles, increasing the likelihood of beneficial AI, and reducing the risks and harms associated with the technology. Nevertheless, the recommendations provided should not be considered exhaustive, as the technology has an immense reach and new AI governance initiatives are developing continuously in this period of growth in AI governance.
It is thus essential for policymakers to survey the most current and relevant governance landscape to identify the best practices that are suitable for their specific context and need.
- Research Article
- 10.1109/mc.2020.3010043
- Oct 1, 2020
- Computer
The articles in this special section focus on government applications that use artificial intelligence (AI). The repercussions of AI in government are broad and significant. The characteristics of these technologies will have an impact on almost everything in public organizations, from governance and the multidimensional perspective of interoperability to the organizational and social implications linked to concepts like public value, transparency, and accountability. This special issue seeks to shed light on the foundations and key elements to be taken into account for AI adoption by public organizations. Governments are the primary enablers of technology, stimulators of markets, and regulators of general activities in our society. Governments have always sought the common good and, therefore, the advancement of public and collective interests. This is key to understanding, as a first step, why the principles of public-sector organizations do not always match those of the private sector. Public and private perspectives are very different, whether in management, strategy, or policy.
- Research Article
- 10.1177/00208523231187051
- Aug 8, 2023
- International Review of Administrative Sciences
This research proposes a framework for the negative impacts of artificial intelligence (AI) in government by classifying 14 topics of its dark side into five socio-technical categories. The framework is based on a systematic literature review and highlights that the dark side is predominantly driven by political, legal, and institutional aspects, but is also influenced by data and technology. Lack of understanding of AI outcomes, biases, and errors, as well as manipulation of intelligent algorithms and cognitive machines, are contributing factors. The public sector should create knowledge about AI from an ethical, inclusive, and strategic perspective, involving experts from different areas. Points for practitioners: Government officials and other decision-makers should be aware of the potential benefits of artificial intelligence, but also of its dark side, and try to avoid these potential negative consequences.
- Research Article
- 10.1163/22112987-bja00004
- Apr 10, 2025
- Yearbook of Islamic and Middle Eastern Law Online
This country survey examines Saudi Arabia’s approach to artificial intelligence (AI) governance, focusing on the regulatory and ethical frameworks that shape its AI ecosystem. The study situates Saudi Arabia’s AI policies within the broader context of Vision 2030, emphasising the role of the Saudi Data and Artificial Intelligence Authority (SDAIA) in developing guidelines for AI ethics and generative AI applications. The Kingdom’s AI strategy is characterised by a balance between cultural values, international AI ethics standards, and economic development goals. Unlike rigid regulatory models, Saudi Arabia’s AI governance adopts a flexible, principle-based approach, incorporating voluntary compliance incentives such as motivational badges. The survey also contrasts Saudi Arabia’s AI governance with other major regulatory models, including those of the European Union, the United States, and China. The findings highlight the Kingdom’s goal to position itself as a global AI hub while ensuring alignment with national priorities and ethical considerations.
- Research Article
- 10.1016/j.techsoc.2021.101675
- Jul 22, 2021
- Technology in Society
Factors influencing the use of artificial intelligence in government: Evidence from China
- Research Article
- 10.1016/j.telpol.2023.102673
- Oct 16, 2023
- Telecommunications Policy
The influence of China in AI governance through standardisation
- Research Article
- 10.1108/intr-01-2022-0042
- Jun 27, 2023
- Internet Research
Purpose: Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.
Design/methodology/approach: The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).
Findings: The results of the authors' review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. The review identified four themes in the AIG literature: technology; stakeholders and context; regulation; and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, lack of attention to the AIG context, uncertain effectiveness of ethical principles and regulation, and insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical; stakeholder and contextual; regulatory; and process.
Research limitations/implications: To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure an organization's use of AI technologies aligns with its strategies, objectives and values, complete with legal requirements, ethical principles and the requirements set by stakeholders. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit, and collaborative governance as a research approach.
Practical implications: For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.
Social implications: For society, the review elucidates the multitude of stakeholders involved in AI governance activities and the complexities related to balancing the needs of different stakeholders.
Originality/value: By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms.
- Research Article
- 10.31703/gssr.2025(x-iii).29
- Nov 18, 2025
- Global Social Sciences Review
This paper examines the evolving framework of domestic Artificial Intelligence (AI) governance in the United States and its implications for national security and global stability. As AI technologies advance rapidly, the U.S. faces increasing pressure to balance innovation, ethical regulation, and security imperatives. The study explores key policy mechanisms, institutional responses, and strategic initiatives shaping AI governance, including federal oversight, private-sector collaboration, and defense applications. It also assesses how domestic governance decisions influence international norms, competition, and cooperation in AI development. Through a multidisciplinary analysis combining policy review and security studies, the paper highlights the dual challenge of maintaining U.S. technological leadership while mitigating geopolitical risks and ethical concerns. The findings underscore the need for a coherent AI governance strategy that safeguards national interests, promotes responsible innovation, and supports a stable international AI order.