Public value positions and design preferences toward AI-based chatbots in e-government. Evidence from a conjoint experiment with citizens and municipal front desk officers

Developing a chatbot to handle citizen requests in a municipal office requires multiple design choices. We use public value theory to test how value positions shape these design choices. In a conjoint experiment, we asked German citizens (n = 1690) and front desk officers in municipalities (n = 267) to evaluate hypothetical chatbot designs that differ in how well they fulfill goals derived from different value positions: (1) maintaining security, privacy, and accountability, (2) improving administrative performance, and (3) improving user-friendliness and empathy. Experimental results show that citizens prefer chatbots programmed by domestic firms, value chatbots that handle routine decisions without exercising discretion, and strongly prefer human intervention when conversations fail. While altering the salience of public sector values through priming does not consistently affect citizens' design choices, we find systematic differences between citizens and front desk officers. However, these differences are qualitative rather than fundamental. We conclude that citizens and front desk officers share public values that provide a sufficient basis for chatbot designs that overcome a potential legitimacy gap of AI in citizen-state service encounters.
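In a conjoint experiment of this kind, attribute effects on the probability of choosing a chatbot profile are typically estimated by regressing the choice on dummy-coded attribute levels, with standard errors clustered by respondent. The sketch below illustrates that generic estimation strategy on synthetic data; the attribute names are hypothetical placeholders, and it is not the authors' analysis code.

# Generic sketch: estimating average marginal component effects (AMCEs)
# in a conjoint design. Synthetic data; attribute names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n // 4), 4),  # four profiles per respondent
    "developer": rng.choice(["domestic", "foreign"], n),
    "discretion": rng.choice(["routine_only", "discretionary"], n),
    "handover": rng.choice(["human_fallback", "none"], n),
})
# Synthetic choices that loosely mimic the reported preference pattern.
df["chosen"] = rng.binomial(1, 0.4
    + 0.1 * (df["developer"] == "domestic")
    + 0.1 * (df["handover"] == "human_fallback"))

# OLS on the binary choice with respondent-clustered standard errors;
# each coefficient is an AMCE relative to the omitted baseline level.
model = smf.ols(
    "chosen ~ C(developer) + C(discretion) + C(handover)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(model.params)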

Open Government Data (OGD) as a catalyst for smart city development: Empirical evidence from Chinese cities

While existing smart city models recognize the importance of data, they often overlook the specific role of Open Government Data (OGD) for urban development. This study addresses this gap by adapting the Smart City Model to explicitly include OGD as a critical component. Drawing on panel data from the 2022–2024 Chinese Cities Digitalization Evolution Index, we employ Structural Equation Modeling (SEM) to empirically examine the direct and indirect effects of OGD, digital infrastructure, and digital economy on smart city development. Our analysis identifies four key pathways, revealing that while digital infrastructure positively influences smart city development directly, the indirect pathways incorporating OGD demonstrate stronger effects. OGD plays a pivotal role by significantly enhancing the digital economy and digital infrastructure, as well as directly contributing to smart city development. This research contributes to the smart city literature by moving beyond discussions of individual components to empirically test the relationships between these elements. By positioning OGD as a catalyst, we provide a nuanced understanding of the mechanisms through which data-driven initiatives empower smart city development. Our findings offer valuable insights into the multifaceted ways OGD serves as a driving force for urban innovation, challenging the traditional view of government data as a passive resource. This study highlights the importance of OGD as a strategic asset for policymakers seeking to harness the potential of data-driven urban governance. We conclude with policy recommendations for leveraging OGD to support sustainable and efficient smart city development.
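The direct and indirect effects described above follow standard path logic: an indirect effect is the product of the coefficients along a path, for example OGD → digital economy → smart city development. The sketch below illustrates this with a simplified two-equation path model on synthetic data; it is not the paper's SEM specification, and all variable names are placeholders.

# Simplified path-analysis sketch of one direct and one indirect effect.
# Synthetic data; not the paper's SEM model or measurement structure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
ogd = rng.normal(size=n)
infra = rng.normal(size=n)
digital_economy = 0.6 * ogd + 0.3 * infra + rng.normal(scale=0.5, size=n)
smart_city = 0.2 * ogd + 0.4 * digital_economy + 0.3 * infra + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"ogd": ogd, "infra": infra,
                   "digital_economy": digital_economy, "smart_city": smart_city})

# Path a: OGD -> digital economy
a = smf.ols("digital_economy ~ ogd + infra", data=df).fit()
# Path b and direct path c': mediator and OGD -> smart city development
b = smf.ols("smart_city ~ digital_economy + ogd + infra", data=df).fit()

direct = b.params["ogd"]                                  # c'
indirect = a.params["ogd"] * b.params["digital_economy"]  # a * b
print(f"direct effect: {direct:.2f}, indirect effect via digital economy: {indirect:.2f}")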

Regulating generative AI: The limits of technology-neutral regulatory frameworks. Insights from Italy's intervention on ChatGPT

Existing literature has predominantly concentrated on the legal, ethical, governance, political, and socioeconomic aspects of AI regulation, often relegating the technological dimension to the periphery, a tendency reflected in the design, use, and development of AI regulatory frameworks that are technology-neutral. The emergence and widespread use of generative AI models present new challenges for public regulators aiming to implement effective regulatory interventions. Generative AI rests on distinctive technological properties that require a comprehensive understanding prior to the deployment of pertinent regulation. This paper focuses on the recent suspension of ChatGPT in Italy to explore the impact that the specific technological fabric of generative AI has on the effectiveness of technology-neutral regulation. Drawing on the findings of an exploratory case study, the paper contributes to the understanding of the tensions between the specific technological features of generative AI and the effectiveness of a technology-neutral regulatory framework. It offers relevant implications for practice, arguing that until this tension is effectively addressed, public regulatory interventions are likely to underachieve their intended objectives.

A more secure framework for open government data sharing based on federated learning

Open government data (OGD) has recently attracted significant public interest and carries substantial social value: it enables governments to make more accurate and efficient decisions based on real and comprehensive data. It also helps break down information silos, improve service quality and management efficiency, and enhance public trust in government activities. This is crucial for advancing public management modernization, fostering technological innovation, and strengthening governance capabilities. This study focuses on how to share OGD more securely and develops a more secure framework for open government data sharing based on federated learning. Inspired by the government data authorization operation model, the framework includes four categories of participants: OGD providers, OGD collectors, OGD operators, and OGD users. We further analyze modeling techniques for horizontal federated learning, vertical federated learning, and federated transfer learning. Applying the framework to typical scenarios in China illustrates its effectiveness in preventing information leakage, protecting data privacy, and improving model security, providing more reliable and efficient solutions for government governance and public services. Future research can continue to explore the application of privacy-computing technologies in the secure sharing of OGD to further enhance data security and the potential of OGD.
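For readers unfamiliar with the technique, horizontal federated learning lets each data holder train a model locally and share only model parameters with a coordinator, which aggregates them, so raw records never leave the provider. The sketch below is a generic federated averaging (FedAvg) illustration on synthetic data; it does not reproduce the authors' framework or participant roles.

# Minimal federated averaging (FedAvg) sketch for horizontal federated learning.
# Generic illustration on synthetic data; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One participant trains a logistic-regression model on its local data only."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)  # gradient step using only local records
    return w

# Three data providers, each holding its own records (never shared).
datasets = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(float)
    datasets.append((X, y))

w_global = np.zeros(4)
for _round in range(20):
    # Each participant starts from the global model and trains locally.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in datasets]
    # The coordinator aggregates parameters, weighted by local sample counts.
    sizes = np.array([len(y) for _, y in datasets], dtype=float)
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("aggregated global weights:", np.round(w_global, 3))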

Does trust in government moderate the perception towards deepfakes? Comparative perspectives from Asia on the risks of AI and misinformation for democracy

There have recently been growing global concerns about misinformation, and more specifically about how deepfake technologies have been used to run disinformation campaigns. These concerns, in turn, have influenced people's perceptions of deepfakes, often associating them with threats to democracy and fostering less positive views. But does high trust in government mitigate these influences, thereby strengthening positive perceptions of deepfakes? In a cross-national survey conducted in Malaysia, Singapore, and India, we found no evidence that positive attitudes towards deepfakes are negatively associated with either concern about the spread of misinformation online or perceived risks of AI to democracy. However, when accounting for the moderating factor of trust in government, respondents in Singapore with high trust levels exhibited more positive attitudes towards deepfakes, despite their concerns about misinformation. Similarly, higher trust in government correlated with more favorable perceptions of deepfakes even among those who view AI as a risk to democracy; this effect is evident across all three countries. In the conclusion, we spell out the implications of these findings for politics in Asia and beyond.
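Moderation of this kind is commonly tested with an interaction term between trust in government and the concern or risk measure when predicting attitudes towards deepfakes. The sketch below illustrates that setup on synthetic data; the variable names are placeholders and do not reflect the survey's actual coding or model.

# Illustrative moderation test: does trust in government moderate the link
# between misinformation concern and attitudes towards deepfakes?
# Synthetic data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1500
df = pd.DataFrame({
    "country": rng.choice(["Malaysia", "Singapore", "India"], n),
    "misinfo_concern": rng.normal(size=n),
    "gov_trust": rng.normal(size=n),
})
# Synthetic attitude in which trust dampens the concern effect (illustration only).
df["deepfake_attitude"] = (-0.2 * df["misinfo_concern"]
                           + 0.3 * df["gov_trust"]
                           + 0.15 * df["misinfo_concern"] * df["gov_trust"]
                           + rng.normal(scale=0.5, size=n))

model = smf.ols(
    "deepfake_attitude ~ misinfo_concern * gov_trust + C(country)", data=df
).fit()
# A positive misinfo_concern:gov_trust coefficient indicates that higher trust
# weakens the negative link between concern and attitudes towards deepfakes.
print(model.summary())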

Transforming towards inclusion-by-design: Information system design principles shaping data-driven financial inclusiveness

Digitalization and datafication of financial systems result in more efficiency, but might also result in the exclusion of certain groups. Governments are looking for ways to increase inclusion and leave no one behind. To do so, they must govern an organizational ecosystem of public and private parties. We derive value-based requirements through a systematic research methodology and iteratively refine design principles for achieving inclusivity goals. This refinement process is enriched by interviews with field experts, leading to the formulation of key design principles: the essential role of inclusive metrics, leveraging alternative data sources, ensuring transparency in loan processes and the ability to contest decisions, providing tailored credit solutions, and maintaining long-term system sustainability. The government's role is to ensure a level playing field in which all parties have equal access to the data. Following these principles ensures that exclusion and discrimination become visible and can be avoided. This study underscores the necessity of system-level transformations, inclusion-by-design, and advocacy for a new system design complemented by regulatory updates, new data integration, inclusive AI, and shifts in organizational collaboration. These principles can also be applied in other data-driven governance settings.

Bridging the gap: Towards an expanded toolkit for AI-driven decision-making in the public sector

AI-driven decision-making systems are becoming instrumental in the public sector, with applications spanning areas like criminal justice, social welfare, financial fraud detection, and public health. While these systems offer great potential benefits to institutional decision-making processes, such as improved efficiency and reliability, they face the challenge of aligning machine learning (ML) models with the complex realities of public sector decision-making. In this paper, we examine five key challenges where misalignment can occur: distribution shifts, label bias, and the influence of past decision-making on the data side, as well as competing objectives and humans-in-the-loop on the model output side. Our findings suggest that standard ML methods often rely on assumptions that do not fully account for these complexities, potentially leading to unreliable and harmful predictions. To address this, we propose a shift in modeling efforts from focusing solely on predictive accuracy to improving decision-making outcomes. We offer guidance for selecting appropriate modeling frameworks, including counterfactual prediction and policy learning, by considering how the model estimand connects to the decision-maker's utility. Additionally, we outline technical methods that address specific challenges within each modeling approach. Finally, we argue for the importance of external input from domain experts and stakeholders to ensure that model assumptions and design choices align with real-world policy objectives, taking a step towards harmonizing AI and public sector objectives.
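As one concrete example of the shift from predictive accuracy to decision outcomes, policy learning frameworks evaluate a candidate decision rule against logged historical decisions, for instance via inverse propensity weighting. The sketch below is a generic off-policy evaluation illustration on synthetic data; it is not the authors' proposed method.

# Generic off-policy evaluation sketch via inverse propensity weighting (IPW).
# Synthetic logged data; illustration only, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Logged data: a feature, the historical decision (0/1), and the observed outcome.
x = rng.normal(size=n)
propensity = 1.0 / (1.0 + np.exp(-x))       # probability the old policy chose action 1
action = rng.binomial(1, propensity)
outcome = 1.0 + 0.5 * x + action * (x > 0) + rng.normal(scale=0.1, size=n)

# Candidate policy to evaluate: act only when the feature is positive.
new_action = (x > 0).astype(int)

# IPW estimate of the candidate policy's average outcome, using only logged data:
# reweight outcomes where the logged action matches the candidate policy's action
# by the inverse probability of that logged action.
prob_logged = np.where(action == 1, propensity, 1.0 - propensity)
match = (new_action == action).astype(float)
ipw_value = np.mean(match * outcome / prob_logged)
print(f"estimated value of candidate policy: {ipw_value:.3f}")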
