The European 'post-digital' public sphere: foundations of an emerging paradigm in the social sciences
The present study investigates the evolution of public institutional communication in the European Union in the context of accelerating digital transformation. It introduces a conceptual framework for understanding the emergence of a ‘post-digital’ European public sphere, where digital technologies – rather than becoming obsolete – are deeply integrated into human-machine interactions. A key driver of this shift is generative artificial intelligence (AI), which increasingly mediates public discourse and governance processes. The research adopts a qualitative methodology based on expert interviews, examining how AI-driven systems are transforming institutional communication practices and reshaping citizen participation within the EU’s multilevel governance and regulatory environment. Findings show that EU institutions are progressively integrating AI tools, such as chatbots, into their communication strategies to enhance efficiency and citizen engagement. However, this transformation raises critical challenges, including algorithmic bias, transparency, ethical governance, and democratic accountability. The discussion addresses the epistemological implications of AI integration, highlighting how digital automation is influencing both theoretical approaches and research methodologies in the social sciences. The study contributes to a deeper understanding of the socio-technical dynamics underpinning the EU’s evolving public communication and the broader consequences of AI-driven governance in a post-digital context.
- Research Article
31
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
- Research Article
62
- 10.1016/j.oneear.2022.02.004
- Mar 1, 2022
- One Earth
Scrutinizing environmental governance in a digital age: New ways of seeing, participating, and intervening
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
- 10.1108/tg-08-2025-0240
- Dec 4, 2025
- Transforming Government: People, Process and Policy
Purpose: This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue – spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences.
Design/methodology/approach: This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions.
Findings: This study identifies a critical gap between the rapid technological advancements in generative AI and the institutional readiness of public governance systems – particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement.
Research limitations/implications: As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures.
Practical implications: This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts.
Social implications: The editorial underscores that generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences.
Originality/value: This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.
- Research Article
8
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Research Article
16
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
Getting AI Right: Introductory Notes on AI & Society
- Book Chapter
- 10.1108/s1548-643520230000020017
- Mar 13, 2023
- Conference Article
10
- 10.3390/proceedings2022081136
- Apr 29, 2022
There is a growing debate on how to regulate and make responsible use of digital technologies, particularly artificial intelligence (AI). In an increasingly globalized scenario, power relations and inequalities between different countries and regions need to be addressed. While developed countries are leading the building of an ethical governance architecture for AI, countries of the so-called global south (e.g., countries with a post-colonial history, also called developing countries) find that their vulnerability and dependence on northern domination lead them to import digital technology, capital and modes of organization from these developed countries. This imbalance, in the absence of ethical reflection, can have a significantly negative impact on their already excluded, oppressed and discriminated populations. In this paper, we explore to what extent countries of the global south that import digital technology from developed countries may be affected if the need for multi-level, ethical global governance of AI from a human rights/democratic perspective is not taken into account. In particular, we address two problems that may arise: (a) lack of governance capacity in southern populations resulting from their dependence on northern leadership for technological innovations and regulations, and (b) material and workforce extractivism inflicted by northern countries on southern ones.
- Research Article
2
- 10.1093/anncom/wlaf005
- Jun 12, 2025
- Annals of the International Communication Association
Artificial intelligence (AI) has become pervasive in everyday life and, with the publication of large language models such as ChatGPT, especially relevant in the context of media and public communication. This paper synthesizes the conceptualizations, types, methods, and evaluations of AI in public communication in the early phases of innovation adoption, before the broad public discussion around generative AI set in. We conducted a systematic review of empirical research on AI in six social- and computing-science databases up to and including 2022 (k = 198). Results show a steep increase in the number of studies published on AI in public communication in just four years. To facilitate a common understanding of what AI is and how it is studied, AI applications are grouped into four types: (a) AI as method, (b) generative AI, (c) AI as communicator, and (d) AI generally. People’s interaction with and attitudes towards AI are central in this research. In addition, AI has mostly been investigated with quantitative methods, often with human participants, as evidenced by the dominance of surveys and experiments. The reviewed research primarily comes from English-speaking countries and often fails to define what AI is or to formulate normative implications. Five blind spots of AI research in public communication and their implications for future empirical studies are discussed.
- Research Article
- 10.51702/esoguifd.1583408
- May 15, 2025
- Eskişehir Osmangazi Üniversitesi İlahiyat Fakültesi Dergisi
Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may eventually surpass it. The rapid development of these technologies has raised various ethical debates concerning moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions in detail, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thinking about artificial intelligence has raised deep philosophical questions concerning human nature, consciousness, and responsibility. The algorithms developed by Alan Turing contributed to the modern shaping of artificial intelligence and put forward the first models, such as the “Turing Test”, for assessing whether machines exhibit human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence comprises systems designed to perform specific tasks that do not exhibit general intelligence outside those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most widely used artificial intelligence applications today fall into the category of weak artificial intelligence. 
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Autonomous weapons and AI-managed justice systems, in particular, raise concerns about human rights and individual freedoms. In this context, the ethical framework of artificial intelligence has deep impacts on the future of humanity and human-machine interaction, not limited to technological boundaries. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness and whether such conscious systems could have a spiritual status have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles to be considered in the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for the development of artificial intelligence in harmony with social values and for the benefit of humanity. 
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.
- Research Article
- 10.63501/jj9ksr56
- Jun 11, 2025
- INNOVAPATH
Artificial Intelligence (AI), Artificial General Intelligence (AGI), and other emerging technologies are significantly reshaping modern healthcare systems. Their integration across clinical, operational, and public health settings has already produced measurable improvements in diagnostic accuracy, treatment personalization, operational efficiency, and epidemic response. These technologies leverage vast amounts of data, advanced algorithms, and computational power to augment clinical decision-making, optimize workflows, and expand access to care. This manuscript explores the real-world applications of these technologies, drawing on recent literature and case studies to illustrate both their potential and limitations. Specific examples include AI-driven diagnostic imaging, predictive analytics for hospital management, and AI-based models for pandemic surveillance. It also addresses the growing use of AI in personalized medicine and the increasing incorporation of robotics, deep learning, natural language processing, edge computing, quantum computing, health information and learning technologies (HILT), digital twin systems, and neural networks in everyday clinical practice (Topol, 2019; Rajkomar et al., 2019; Esteva et al., 2017). The findings indicate that while AI and related innovations hold promise for revolutionizing care delivery, challenges related to algorithmic bias, data privacy, ethical governance, and regulatory oversight remain critical considerations. The disparity in access to these tools, particularly in low-resource settings, underscores the need for inclusive and equitable frameworks. A multi-stakeholder, ethical, and interdisciplinary approach is required to ensure these tools fulfill their transformative potential while safeguarding patient rights and promoting equitable healthcare outcomes worldwide. 
As the healthcare landscape evolves, the thoughtful integration of AI, AGI, and complementary technologies will be pivotal in achieving scalable, efficient, and patient-centered care delivery.
- Research Article
- 10.1152/advan.00119.2025
- Dec 1, 2025
- Advances in physiology education
As artificial intelligence (AI) becomes more integrated into healthcare, medical students need foundational AI literacy. Yet traditional, descriptive methods of teaching AI topics often fail to engage learners. This article introduces a new application of cinema to teaching AI concepts in medical education. With meticulously chosen clips from the movie "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)", students were introduced to the primary differences between artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). This method drew encouraging responses from students, with learners indicating greater conceptual clarity and heightened interest. Film, as an emotive and visual medium, not only makes difficult concepts easy to understand but also encourages curiosity, ethical consideration, and higher-order thought. This pedagogic intervention demonstrates how narrative-based learning can make abstract AI systems more relatable and clinically relevant for future physicians. Beyond technical content, the method offers opportunities to cultivate critical engagement with the ethical and practical dimensions of AI in healthcare. Integrating film into AI instruction could bridge the gap between theoretical knowledge and clinical application, offering a compelling pathway to enrich medical education in a rapidly evolving digital age. NEW & NOTEWORTHY This article introduces a new learning strategy that employs film to teach artificial intelligence (AI) principles in medical education. By using clips from the movie "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)" to clarify artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI), the approach converted passive learning into an emotionally evocative and intellectually stimulating experience. 
Students experienced enhanced comprehension and increased interest in artificial intelligence. This narrative-driven, visually oriented process promises to incorporate technical and ethical AI literacy into medical curricula with enduring relevance and impact.
- Research Article
236
- 10.1057/s41599-020-0494-4
- Jun 17, 2020
- Humanities and Social Sciences Communications
The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence; this is known as weak AI. However, many AI researchers have pursued the aim of developing artificial intelligence that is in principle identical to human intelligence, called strong AI. Weak AI is less ambitious than strong AI, and therefore less controversial. However, there are important controversies related to weak AI as well. This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Although AGI may be classified as weak AI, it is close to strong AI because one chief characteristic of human intelligence is its generality. Although AGI is less ambitious than strong AI, it had critics almost from the very beginning. One of the leading critics was the philosopher Hubert Dreyfus, who argued that computers, which have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit and therefore cannot be articulated and incorporated in a computer program. However, today one might argue that new approaches to artificial intelligence research have made his arguments obsolete. Deep learning and Big Data are among the latest approaches, and advocates argue that they will be able to realize AGI. A closer look reveals that although the development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world.
- Discussion
14
- 10.1016/s2214-109x(23)00037-2
- Jan 23, 2023
- The Lancet Global Health
AI telemedicine screening in ophthalmology: health economic considerations
- Research Article
2
- 10.3390/rel15010079
- Jan 9, 2024
- Religions
Humanistic Buddhism is one of the mainstreams of modern Buddhism, with special emphasis on the humanistic dimension. With the development of artificial intelligence (AI) technology, Humanistic Buddhism is also at an important stage of modernization and transformation, and thus faces a continuous negotiation between religious values and technological innovations. This paper first argues that AI is technically beneficial to the propagation of Buddhism, citing several cases in which AI technology has been used in Buddhism. Then, by comparing Master Hsing Yun’s Buddhist ethics to “Posthuman” ethics, it points out that the theories of Humanistic Buddhism share similarities with AI and Posthuman ethics. Among them, Master Hsing Yun’s theory of “the nature of insentient beings” provides an important theoretical reference for the question of “whether AI can become a Buddha”. From the technical and ethical dimensions, it points out that the interaction between Humanistic Buddhism and AI can promote original uses or implementations of AI technology. However, it should also be noted that, compared to the cases of “Artificial Narrow Intelligence” discussed in the paper, “Strong AI” could lead to far more serious ethical crises. It is also likely to foster a cult of science and technology, and thus subvert the humanistic tradition of Buddhism with a new instrumental rationality. In addition, there are some potential pitfalls that Humanistic Buddhism may encounter when using AI. Hence, while it is necessary to encourage the use of technologies such as AI in contemporary Buddhism, it is also important for Buddhism to keep a critical distance from digital technologies.