Generative AI and the urban AI policy challenges ahead: Trustworthy for whom?

Abstract

Purpose
This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue – spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences.

Design/methodology/approach
This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions.

Findings
This study identifies a critical gap between the rapid technological advancements in generative AI and the institutional readiness of public governance systems, particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries in the absence of robust accountability mechanisms. The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement.

Research limitations/implications
As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate generative and Urban AI through empirical, normative and practice-based lenses. Its limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures.

Practical implications
This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts.

Social implications
The editorial underscores that generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences.

Originality/value
This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement, and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.

Similar Papers
  • Research Article
  • 10.1108/dts-08-2025-0255
User readiness and technology adoption in AI-driven smart cities: a systematic review of generative and predictive models for advancing the SDGs
  • Dec 4, 2025
  • Digital Transformation and Society
  • Nuning Kristiani + 3 more

Purpose
This study examines the integration of generative and predictive artificial intelligence (AI) models within smart cities, focusing on how user readiness and technology adoption influence their contribution to sustainable urban development and governance.

Design/methodology/approach
The study applies a systematic literature review following PRISMA guidelines and synthesizes evidence from 50 peer-reviewed studies (2018–2025) indexed in Scopus and Web of Science. It combines bibliometric mapping using VOSviewer with thematic analysis to examine the drivers, barriers and governance mechanisms shaping the adoption of generative, predictive and hybrid applications in urban contexts.

Findings
Generative AI fosters participatory engagement, citizen co-design and interactive simulations, advancing SDG 11 (Sustainable Cities and Communities) and SDG 4 (Quality Education) through enhanced digital literacy and inclusive planning. Predictive AI improves operational efficiency, forecasting accuracy and data-driven policymaking, supporting SDG 9 (Industry, Innovation and Infrastructure) and SDG 13 (Climate Action) by promoting sustainable resource use and climate-resilient management. Hybrid AI integrates these strengths, addressing both social and operational aspects of smart city development and aligning with SDG 17 (Partnerships for the Goals) through cross-sector collaboration and shared governance. Collectively, these models contribute to broader sustainability goals, including SDGs 3, 7 and 12.

Research limitations/implications
This review acknowledges several key limitations. Reliance on Scopus and Web of Science may exclude regionally significant or domain-specific studies not indexed in these databases. The focus on English-language publications introduces potential language bias, possibly overlooking relevant research from non-English-speaking regions. Restricting the timeframe to 2018–2025 captures recent developments but may omit earlier foundational work or the most recent studies not yet indexed. Differences in research design, policy contexts and sample characteristics also affect comparability and limit generalizability. Future research should broaden data sources, include multilingual literature and adopt mixed-methods and longitudinal approaches to enhance contextual diversity and empirical robustness.

Practical implications
The findings provide practical guidance for policymakers, urban planners and technology developers to design AI governance systems that are transparent, accountable and aligned with the SDGs. Integrating generative and predictive AI can enhance operational efficiency, support participatory planning and promote responsible decision-making. The findings inform the development of adaptive policy frameworks that advance SDG 9 (Industry, Innovation and Infrastructure), SDG 11 (Sustainable Cities and Communities) and SDG 13 (Climate Action) through digital literacy initiatives, cross-sector collaboration and data-informed management. Strengthening these practices enables cities to translate AI’s potential into tangible contributions to inclusive and sustainable urban transformation.

Social implications
Integrating user readiness and digital literacy into AI adoption is essential for building inclusive and trustworthy smart cities. These efforts support SDG 4 (Quality Education), SDG 10 (Reduced Inequalities) and SDG 16 (Peace, Justice and Strong Institutions). Generative AI encourages citizen participation and collaborative planning, while predictive AI improves service accessibility and data-informed governance. Promoting ethical awareness and community engagement helps narrow digital divides and address bias. Collectively, these elements advance SDG 11 (Sustainable Cities and Communities) and SDG 17 (Partnerships for the Goals) by fostering socially responsive and transparent AI-driven urban development.

Originality/value
This review is among the first to integrate perspectives on user readiness and technology adoption with comparative insights into generative and predictive AI in smart cities. It advances understanding of how AI-driven urban innovation supports inclusivity, efficiency and sustainability, while outlining policy directions and a future research agenda for equitable and transparent AI governance.
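
To make the screening workflow named in the abstract above concrete, the following is a minimal Python sketch of a PRISMA-style identification and deduplication tally. It is an illustration only, not the authors' pipeline: the file names (scopus.csv, wos.csv) and column names (doi, year) are assumptions for the example, and a real review adds manual title/abstract and full-text eligibility screening on top of these automated steps.

    import csv

    def load_records(path):
        # Read one database export (assumed to be a CSV with "doi" and "year" columns).
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    # PRISMA "identification": pool records from both assumed database exports.
    records = load_records("scopus.csv") + load_records("wos.csv")
    print("Records identified:", len(records))

    # Deduplicate across databases by DOI (case-insensitive), dropping records without one.
    unique = {r["doi"].strip().lower(): r for r in records if r.get("doi")}
    print("After deduplication:", len(unique))

    # Apply the review's stated 2018-2025 publication window.
    eligible = [r for r in unique.values()
                if r.get("year", "").isdigit() and 2018 <= int(r["year"]) <= 2025]
    print("Within 2018-2025 window:", len(eligible))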

  • Discussion
  • Cited by 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer

  • Research Article
  • Cited by 31
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

  • Research Article
  • Cited by 25
  • 10.17705/1thci.00130
ICIS 2019 SIGHCI Workshop Panel Report: Human–Computer Interaction Challenges and Opportunities for Fair, Trustworthy and Ethical Artificial Intelligence
  • Jan 1, 2020
  • AIS Transactions on Human-Computer Interaction
  • Lionel P Robert + 2 more

Artificial Intelligence (AI) is rapidly changing every aspect of our society—including amplifying our biases. Fairness, trust and ethics are at the core of many of the issues underlying the implications of AI. Despite this, research on AI in relation to fairness, trust and ethics remains scarce in the information systems (IS) field. This panel brought together academic, business and government perspectives to discuss these challenges and identify potential solutions. This panel report presents eight themes based on the discussion of two questions: (1) What are the biggest challenges to designing, implementing and deploying fair, ethical and trustworthy AI? and (2) What are the biggest challenges to policy and governance for fair, ethical and trustworthy AI? The eight themes are: (1) identifying AI biases; (2) drawing attention to AI biases; (3) addressing AI biases; (4) designing transparent and explainable AI; (5) AI fairness, trust and ethics: old wine in a new bottle?; (6) AI accountability; (7) AI laws, policies, regulations and standards; and (8) frameworks for fair, ethical and trustworthy AI. Based on the results of the panel discussion, we present research questions for each theme to guide future research in the area of human–computer interaction.

  • Research Article
  • Cited by 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more

  • Research Article
  • 10.1163/22112987-bja00004
AI Governance in Saudi Arabia: Cultural Values and Ethical AI Regulations in Comparative Perspective
  • Apr 10, 2025
  • Yearbook of Islamic and Middle Eastern Law Online
  • Beata Polok + 1 more

This country survey examines Saudi Arabia’s approach to artificial intelligence (AI) governance, focusing on the regulatory and ethical frameworks that shape its AI ecosystem. The study situates Saudi Arabia’s AI policies within the broader context of Vision 2030, emphasising the role of the Saudi Data and Artificial Intelligence Authority (SDAIA) in developing guidelines for AI ethics and generative AI applications. The Kingdom’s AI strategy is characterised by a balance between cultural values, international AI ethics standards, and economic development goals. Unlike rigid regulatory models, Saudi Arabia’s AI governance adopts a flexible, principle-based approach, incorporating voluntary compliance incentives such as motivational badges. The survey also contrasts Saudi Arabia’s AI governance with other major regulatory models, including those of the European Union, the United States, and China. The findings highlight the Kingdom’s goal to position itself as a global AI hub while ensuring alignment with national priorities and ethical considerations.

  • Research Article
  • Cited by 9
  • 10.1093/polsoc/puaf001
Governance of Generative AI
  • Jan 4, 2025
  • Policy and Society
  • Araz Taeihagh

The rapid and widespread diffusion of generative artificial intelligence (AI) has unlocked new capabilities and changed how content and services are created, shared, and consumed. This special issue builds on the 2021 Policy and Society special issue on the governance of AI by focusing on the legal, organizational, political, regulatory, and social challenges of governing generative AI. This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation. The article then highlights a comprehensive framework to govern generative AI, emphasizing the need for adaptive, participatory, and proactive approaches. The articles in this special issue stress the urgency of developing innovative and inclusive approaches to ensure that generative AI development is aligned with societal values. They explore the need for adaptation of data governance and intellectual property laws, propose a complexity-based approach for responsible governance, analyze how the dominance of Big Tech is exacerbated by generative AI developments and how this affects policy processes, highlight the shortcomings of technocratic governance and the need for broader stakeholder participation, propose new regulatory frameworks informed by AI safety research and learning from other industries, and highlight the societal impacts of generative AI.

  • Single Book
  • 10.62311/nesx/97891
Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems
  • Mar 14, 2025
  • Murali Krishna Pasupuleti

As Artificial Intelligence (AI) advances, so do the risks associated with deepfakes, misinformation, and algorithmic bias, posing significant threats to security, privacy, democracy, and societal trust. "Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems" provides a comprehensive analysis of AI security vulnerabilities, adversarial machine learning, AI-driven misinformation, and bias in automated decision-making. The book explores how AI-generated synthetic media, data poisoning attacks, and biased algorithms are being weaponized for cyber fraud, political manipulation, and unethical automation. It delves into defensive strategies, AI forensic tools, cryptographic AI verification, and fairness-aware machine learning techniques to combat these emerging threats. Additionally, the book examines global AI regulations, governance frameworks, and ethical deployment standards that ensure transparency, accountability, and security in AI-driven ecosystems. Through real-world case studies, technical insights, and policy recommendations, this book serves as an essential resource for AI researchers, cybersecurity professionals, policymakers, and technology leaders aiming to develop trustworthy AI systems that resist adversarial manipulation, misinformation campaigns, and algorithmic bias while fostering fair, transparent, and secure AI adoption.

Keywords: AI security, adversarial machine learning, deepfake detection, AI-generated misinformation, synthetic media, bias mitigation, AI ethics, AI governance, trustworthy AI, explainable AI (XAI), fairness-aware machine learning, cryptographic AI, federated learning security, digital forensics, algorithmic bias, data poisoning attacks, model robustness, cybersecurity in AI, misinformation detection, deep learning security, AI regulatory policies, zero-trust AI, blockchain-based content verification, ethical AI deployment, secure AI frameworks, AI transparency, AI-driven cyber threats, fake news detection, AI fraud prevention.

  • Research Article
  • Cited by 320
  • 10.1016/j.inffus.2023.101896
Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation
  • Jun 23, 2023
  • Information Fusion
  • Natalia Díaz-Rodríguez + 5 more

  • Research Article
  • Cited by 324
  • 10.4018/jdm.2020040105
Artificial Intelligence (AI) Ethics
  • Apr 1, 2020
  • Journal of Database Management
  • Keng Siau + 1 more

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, and the improvement of human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address its ethical and moral challenges. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field devoted to the study of ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI in place, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these issues? What are some of the necessary features and characteristics of an ethical AI? And how can one adhere to the ethics of AI in order to build ethical AI?

  • Research Article
  • 10.2478/picbe-2025-0191
Global Perspectives on Digital and AI Legislation: A Comparative Study of Data Protection, AI Governance, and Healthcare Innovations with a Focus on Romania
  • Jul 1, 2025
  • Proceedings of the International Conference on Business Excellence
  • Cristian Constantin Francu + 1 more

Digital and artificial intelligence (AI) technologies are reshaping governance, requiring adaptive regulatory frameworks to ensure data privacy, digital identity security, and AI ethics. This study examines global approaches to AI and data governance, focusing on the European Union’s (EU) General Data Protection Regulation (GDPR) and AI Act, compared to regulatory structures in the United States (US). Romania serves as a case study to assess national implementation challenges and sector-specific impacts, particularly in healthcare. Using a mixed-methods approach combining legislative analysis, comparative study, and sectoral case examination, the research highlights key takeaways: Romania’s progress in AI-driven healthcare solutions, the necessity of tailored digital infrastructure investments, and the role of government ordinances in ensuring compliance. Policy recommendations emphasize public-private collaboration, regulatory adaptation, and targeted sectoral strategies to enhance Romania’s AI governance while aligning with EU standards.

  • Research Article
  • 10.34190/icair.5.1.4129
Bridging the AI Governance Gap: Ethical and Regulatory Imperatives for Generative AI in Nigeria
  • Dec 4, 2025
  • International Conference on AI Research
  • Oluwatayofunmi Durodola

As generative artificial intelligence (AI) technologies—such as ChatGPT, DALL·E, and other large language and image models—become increasingly mainstream, they introduce new ethical, legal, and governance challenges that are particularly urgent in developing countries. Nigeria, Africa’s most populous nation and a regional technology hub, offers a compelling case study of how these technologies are being adopted in environments with minimal regulatory infrastructure and limited public awareness. This paper examines the ethical and societal implications of generative AI in Nigeria and interrogates the country's preparedness to manage these risks. Despite the creation of the National Centre for Artificial Intelligence and Robotics (NCAIR) in 2020 and the recent passage of legislation such as the Nigeria Data Protection Act (2023) and the Startup Act (2022), Nigeria lacks a unified national AI policy, formal risk classification systems, or sector-specific ethical guidelines. These gaps are important given the widespread, unregulated use of generative AI tools in education, politics, and digital commerce. In higher education, students increasingly rely on generative AI for assignments and projects, raising concerns about academic integrity in a system already strained by infrastructural deficits. Meanwhile, in the political domain, deepfake videos and AI-generated misinformation have circulated during election periods, threatening democratic stability in a media environment prone to disinformation and weakly regulated content. The paper compares Nigeria’s regulatory trajectory with global trends, particularly the European Union’s Artificial Intelligence Act and similar initiatives in Kenya, South Africa, and Rwanda. It highlights how Nigeria’s reactive approach to AI governance contrasts sharply with more proactive global models. Sectoral analysis reveals risks including digital labour displacement, cultural misrepresentation through foreign-trained models, algorithmic bias, and the erosion of public trust. Ultimately, the study calls attention to Nigeria’s urgent need for a comprehensive, context-sensitive AI ethics and governance framework. Through an analysis grounded in local realities and informed by global comparisons, the paper contributes to broader conversations about equitable, responsible AI adoption in the Global South.

  • Research Article
  • Cited by 6
  • 10.2139/ssrn.3873097
Artificial Intelligence and Corporate Social Responsibility: Employees’ Key Role in Driving Responsible Artificial Intelligence at Big Tech
  • Jan 1, 2021
  • SSRN Electronic Journal
  • Susan Von Struensee

  • Research Article
  • Cited by 7
  • 10.21037/qims-24-723
A literature review of artificial intelligence (AI) for medical image segmentation: from AI and explainable AI to trustworthy AI.
  • Dec 1, 2024
  • Quantitative imaging in medicine and surgery
  • Zixuan Teng + 10 more

Medical image segmentation is a vital aspect of medical image processing, allowing healthcare professionals to conduct precise and comprehensive lesion analyses. Traditional segmentation methods are often labor intensive and influenced by the subjectivity of individual physicians. The advent of artificial intelligence (AI) has transformed this field by reducing the workload of physicians and improving the accuracy and efficiency of disease diagnosis. However, conventional AI techniques are not without challenges. Issues such as inexplicability, uncontrollable decision-making processes, and unpredictability can lead to confusion and uncertainty in clinical decision-making. This review explores the evolution of AI in medical image segmentation, focusing on the development and impact of explainable AI (XAI) and trustworthy AI (TAI). This review synthesizes existing literature on traditional segmentation methods, AI-based approaches, and the transition from conventional AI to XAI and TAI. The review highlights the key principles and advancements in XAI that aim to address the shortcomings of conventional AI by enhancing transparency and interpretability. It further examines how TAI builds on XAI to improve the reliability, safety, and accountability of AI systems in medical image segmentation. XAI has emerged as a solution to the limitations of conventional AI by providing greater transparency and interpretability, allowing healthcare professionals to better understand and trust AI-driven decisions. However, XAI itself faces challenges, including those related to safety, robustness, and value alignment. TAI has been developed to overcome these challenges, offering a more reliable framework for AI applications in medical image segmentation. By integrating the principles of XAI with enhanced safety and dependability, TAI addresses the critical need for dependable AI systems in clinical settings. TAI presents a promising future for medical image segmentation, combining the benefits of AI with improved reliability and safety. Thus, TAI is a more viable and dependable option for healthcare applications, and could ultimately lead to better clinical outcomes for patients and advance the field of medical image processing.

  • Research Article
  • 10.59490/dgo.2025.937
The evolving AI regulation space
  • May 19, 2025
  • Conference on Digital Government Research
  • Nic Depaula + 4 more

As artificial intelligence (AI) technologies proliferate, the US federal government has oscillated on related executive orders, and no federal law has addressed AI comprehensively. However, many states have passed AI-related legislation in the previous five years, and these laws are evolving and becoming more targeted, creating challenges and opportunities for government agencies. For this study, we compiled all legislation passed and enacted across the 50 US states in 2024 and examined it in terms of: domains; regulation of AI use in the public sector and industry; and novel topics and issues being addressed. In this preliminary analysis, we find that recent AI legislation is multiplying across US states, but unevenly. AI regulation across states continues to address various domains, including healthcare, education, and now also generative AI and AI-generated content. Legislation is expanding the role of the public sector in AI governance and AI policies, but issues of AI ethics, such as bias, are unevenly addressed across states, and few states have comprehensive AI governance frameworks.
