AI Revolution: The Legal Battle Between Indonesia and the European Union to Protect Copyright from Artificial Intelligence
The global surge in generative Artificial Intelligence (AI) has triggered unprecedented legal complexities in copyright protection. This article examines how Indonesia and the European Union (EU) approach the challenges posed by AI-driven content creation and potential copyright infringement. Through doctrinal and comparative legal analysis, this study explores regulatory frameworks, liability questions, and enforcement mechanisms in both jurisdictions. The analysis reveals that Indonesia's Copyright Law No. 28 of 2014 remains anthropocentric, lacking recognition of AI-generated works and mechanisms for regulating AI training on copyrighted materials. By contrast, the EU has developed a more comprehensive approach through the EU Copyright Directive and the AI Act, which incorporates risk-based AI governance and explicit opt-out rights for copyright holders. The study identifies significant regulatory asymmetries between the two jurisdictions and examines potential areas for legal development. Drawing on international frameworks such as the OECD AI Guidelines, this research suggests that Indonesia could benefit from adopting more anticipatory regulatory approaches similar to the EU's principle-based strategy. The findings indicate that proactive legal reforms are necessary to address emerging AI copyright challenges in developing legal systems. This study contributes to the growing body of comparative legal scholarship on AI governance and offers insights for policymakers navigating the intersection of artificial intelligence and intellectual property law.
- Research Article
- 10.1108/tg-08-2025-0240
- Dec 4, 2025
- Transforming Government: People, Process and Policy
Purpose This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue – spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences. Design/methodology/approach This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions. Findings This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems – particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. 
The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement. Research limitations/implications As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures. Practical implications This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts. Social implications The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. 
These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences. Originality/value This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.
- Research Article
- 10.56345/ijrdv12n3s111
- Dec 25, 2025
- Interdisciplinary Journal of Research and Development
Rapid digital transformation has resulted in a paradigm shift, creating the need to shape effective AI (artificial intelligence) governance. AI governance encompasses laws, policies, frameworks, and practices at global, regional, national, and organizational levels. Evaluating the impact of AI requires addressing ethical considerations. The need for appropriate AI governance has been highlighted by the United Nations (UN), which has access to existing normative and policy instruments, such as international standards. Global efforts in the UN system regarding AI governance are grounded in international law. AI governance takes a pragmatic approach and is delivered through an ecosystem that includes research, development, coordination, monitoring, evaluation, capacity building, and stakeholder engagement. Effective AI governance is essential in shaping the future of AI. At the European Union (EU) level, significant progress has been made, particularly with the AI Act, which is the first-ever regulation specifically on AI. The AI Act follows a risk-based approach and categorizes AI systems into four risk levels. It applies not only to EU-based providers but also to those outside the EU, particularly when their AI outputs are used within the EU. As Europe’s geopolitical influence remains crucial, particularly in light of recent developments, AI governance must be addressed from a multi-stakeholder perspective. The aim of this paper is to identify the current challenges in shaping AI governance in Europe through this approach. The study employs a qualitative, case-study methodology, analyzing the roles and needs of key stakeholders, including governments, regulatory bodies, international institutions, AI engineers, ethicists, industry associations, and end-users.
- Research Article
- 10.1093/polsoc/puaf001
- Jan 4, 2025
- Policy and Society
The rapid and widespread diffusion of generative artificial intelligence (AI) has unlocked new capabilities and changed how content and services are created, shared, and consumed. This special issue builds on the 2021 Policy and Society special issue on the governance of AI by focusing on the legal, organizational, political, regulatory, and social challenges of governing generative AI. This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation. The article then highlights a comprehensive framework to govern generative AI, emphasizing the need for adaptive, participatory, and proactive approaches. The articles in this special issue stress the urgency of developing innovative and inclusive approaches to ensure that generative AI development is aligned with societal values. They explore the need for adaptation of data governance and intellectual property laws, propose a complexity-based approach for responsible governance, analyze how the dominance of Big Tech is exacerbated by generative AI developments and how this affects policy processes, highlight the shortcomings of technocratic governance and the need for broader stakeholder participation, propose new regulatory frameworks informed by AI safety research and learning from other industries, and highlight the societal impacts of generative AI.
- Research Article
- 10.32625/kjei.2024.34.209
- Nov 30, 2024
- Korean Society for European Integration
The European Union (EU) is on the verge of enacting the world's inaugural legislation pertaining to artificial intelligence (AI), the AI Act, which will serve to regulate the utilisation of AI. The EU AI Act will regulate the use of AI applications in accordance with four distinct risk levels. In particular, the use of AI technology in the highest risk category necessitates human supervision, while a “transparency obligation” is imposed on companies developing general-purpose AI. Furthermore, the legislation provides for the establishment of various governance structures at the EU and national levels. These structures are intended to facilitate the implementation of the law, foster collaboration and capacity building at the EU level, and convene stakeholders in the AI space. The governance structures include an AI Office, a European AI Board, an Advisory Forum, and a Scientific Panel. The extensive use of AI in the media sector is closely related to the EU legislation and governance framework on AI. Furthermore, the media sector is particularly susceptible to the impact of technological advancements, which gives rise to concerns regarding the credibility of the information it disseminates. For global media companies operating in Europe that utilize AI, it is imperative to ascertain the relationship between AI governance and media accountability, particularly in light of the growing emphasis on the media's social accountability. Accordingly, this study examines the relationship between AI governance and media accountability, with a particular focus on the EU. The regulation of AI through the enactment of laws and the establishment of governance structures is a significant and necessary endeavor. However, the practical application of these regulatory measures in the context of the media presents a challenge. In this regard, it is possible to consider the potential for self-regulation by the media itself. 
In order to guarantee the reliability of media content created using AI, transparency and accountability will be essential. This will require the media to develop their own ethical guidelines and procedures to ensure transparency and reliability on an ongoing basis. In addition, the EU AI legislation can serve as a reference point for AI and other technologies in countries that require AI-related norms.
- Research Article
- 10.58733/imhfd.1624022
- Sep 30, 2025
- İstanbul Medeniyet Üniversitesi Hukuk Fakültesi Dergisi
The article examines the evolving intersection of privacy rights and artificial intelligence (AI) governance, focusing on the role of international soft law frameworks in shaping privacy protections. Recognizing privacy as a fundamental human right enshrined in various global and regional legal instruments, the study highlights its critical dimensions in the context of emerging AI technologies, which pose unique challenges and opportunities for data governance. The methodology includes an analytical review of significant legislative and policy frameworks, such as the European Union Artificial Intelligence Act, UN General Assembly Resolutions on AI and cybercrime, the UN Global Digital Compact, and the Council of Europe Draft Framework Convention on Artificial Intelligence. These frameworks are assessed for their principles and mechanisms aimed at embedding privacy protections throughout AI systems, emphasizing transparency, accountability, fairness, and international collaboration. Findings indicate growing integration of privacy considerations in AI governance through measures like privacy-by-design, risk management, and restrictions on mass surveillance and untargeted data scraping. Key provisions include robust data governance, transparency requirements, safeguards against discriminatory outcomes, and harmonized privacy standards via international cooperation. The study concludes that international soft law frameworks provide a crucial foundation for embedding privacy protections into AI systems, reflecting a global consensus on safeguarding this right amid technological advances. By harmonizing principles across jurisdictions, fostering multi-stakeholder engagement, and promoting ethical AI development, these initiatives support a human-centric approach to AI governance. The research offers insights for international policymakers to align AI innovation with fundamental rights.
- Research Article
- 10.22495/jgrv14i4siart16
- Dec 5, 2025
- Journal of Governance and Regulation
This study conducts a comparative analysis of artificial intelligence (AI) regulation in the European Union (EU) and the Association of Southeast Asian Nations (ASEAN), examining their governance frameworks, enforcement mechanisms, and regulatory impact. The EU AI Act (EU, 2024) establishes a legally binding, centralized regulatory model that prioritizes risk-based AI classification, strict compliance obligations, and human rights protections (Huang et al., 2024). In contrast, ASEAN follows a decentralized, voluntary governance approach, promoting flexibility, innovation, and industry self-regulation (Putra, 2024). The analysis highlights the trade-offs between regulatory stringency and innovation flexibility. The EU’s strict enforcement model ensures accountability and consumer protection but poses compliance burdens for businesses, potentially slowing AI adoption. Conversely, ASEAN’s market-driven approach fosters rapid AI deployment but raises concerns about regulatory fragmentation, ethical risks, and cross-border governance inconsistencies. These findings are crucial for policymakers and businesses navigating AI governance complexities. As AI continues to evolve globally, harmonizing regulatory approaches and establishing mutual recognition mechanisms between regions could enhance AI accountability while supporting innovation, shaping a more cohesive global AI governance landscape.
- Research Article
- 10.2139/ssrn.3882493
- Jan 1, 2021
- SSRN Electronic Journal
The received wisdom is that artificial intelligence (AI) is a competition between the US and China. In this chapter, the author will examine how the European Union (EU) fits into that mix and what it can offer as a ‘third way’ to govern AI. The chapter presents this by exploring the past, present and future of AI governance in the EU. Section 1 serves to explore and evidence the EU’s coherent and comprehensive approach to AI governance. In short, the EU ensures and encourages ethical, trustworthy and reliable technological development. This will cover a range of key documents and policy tools that lead to the most crucial effort of the EU to date: to regulate AI. Section 2 maps the EU’s drive towards digital sovereignty through the lens of regulation and infrastructure. This covers topics such as the trustworthiness of AI systems, cloud, compute and foreign direct investment. In Section 3, the chapter concludes by offering several considerations to achieve good AI governance in the EU.
- Research Article
- 10.33327/ajee-18-7.4-a000103
- Sep 2, 2024
- Access to Justice in Eastern Europe
Background: This study correlates the up-to-date ethical, functional and legal evaluations related to the management and governance of artificial intelligence (AI) under European Union (EU) law, particularly impacting the health data sector and medical standards as provided by the Artificial Intelligence Act within the Regulation adopted by the Council of the European Union in May 2024. The initial proposal for the management and governance of the AI sector was submitted in April 2021. Three years later, on 13 March 2024, the European Union Artificial Intelligence Act (EU AIA) was adopted by the European Parliament. Subsequently, on 21 May 2024, the Council adopted an innovative legislative framework that harmonises the standards and rules for AI regulation. This framework is set to take effect in May 2026, with the central objective of fostering a fair, safe and lawful single market that respects ethical principles and the fundamental rights of the human person. Methods: The current legal analysis focuses on the European Union’s new institutional governance involving a multistage approach to managing health data, ethical artificial intelligence, generative artificial intelligence and classification of types of AI by considering the degree of risk (e.g. artificial intelligence systems with limited risk and systems with high risk) and medical devices. It outlines the legal framework for AI regulation and governance in the EU by focusing on compliance with the previously adopted legislation in the Medical Devices Regulation (2017) and the In-Vitro Diagnostic Regulation (2017). The paper also examines the application of the newly adopted EU Artificial Intelligence Act in relation to national justice systems, previous EU regulations on medical devices and personal data protection regulation, and its correlation with the European Court of Human Rights jurisprudence. This opens up complex discussions related to judicial reform and access to justice. 
For this purpose, as a research objective, the legal analysis includes an innovative perspective following an integrative discussion on the latest legal reforms and regulations of the AI sector in Eastern Europe launched in 2024, with a special focus on the latest developments in the EU Candidate Countries, namely Ukraine and the Republic of Moldova. Results and conclusions: The present research facilitates the exploration of the real benefits of managing innovative AI systems for medical data, research, and development, as well as within the medical technology industry.
- Research Article
- 10.1108/yc-10-2024-2303
- Aug 25, 2025
- Young Consumers
Purpose As generative artificial intelligence (AI) technologies continue to advance and become more prevalent in higher education, addressing the ethical concerns associated with their use is essential. This study emphasizes the need for robust AI governance as young consumers increasingly use generative AI for various applications. This paper aims to examine the ethical challenges posed by generative AI and review the AI policies in higher education that regulate young consumers' use of generative AI, focusing on the ethical use of AI from foundational principles to sustainable governance. Design/methodology/approach Through a content analysis of literature on generative AI policies in higher education published between 2020 and 2024, this research aims to explore a more holistic approach to integrating generative AI into the educational process. The analysis examines academic policies and governance frameworks from 28 journal papers regarding generative AI tools in higher education. Data were collected from publicly accessible sources, such as Scopus, Emerald Insight, ProQuest, Web of Science and ScienceDirect. Findings This study analyses ten elements of the governance framework to identify potential AI governance and policy settings, benefiting stakeholders aiming to enhance the regulatory framework for generative AI use in higher education. The discussions indicate a generally balanced yet cautious approach to integrating generative AI technology, especially considering ethical issues, inherent limitations and data privacy concerns. Originality/value The findings contribute to ongoing discussions to strengthen universities’ responses to new academic challenges posed by the use of generative AI and promote high AI ethical standards across educational sectors.
- Research Article
- 10.1108/dts-08-2025-0255
- Dec 4, 2025
- Digital Transformation and Society
Purpose This study examines the integration of generative and predictive artificial intelligence (AI) models within smart cities, focusing on how user readiness and technology adoption influence their contribution to sustainable urban development and governance. Design/methodology/approach The study applies a systematic literature review following PRISMA guidelines and synthesizes evidence from 50 peer-reviewed studies (2018–2025) indexed in Scopus and Web of Science. It combines bibliometric mapping using VOSviewer with thematic analysis to examine the drivers, barriers and governance mechanisms shaping the adoption of generative, predictive and hybrid applications in urban contexts. Findings Generative AI fosters participatory engagement, citizen co-design and interactive simulations, advancing SDG 11 (Sustainable Cities and Communities) and SDG 4 (Quality Education) through enhanced digital literacy and inclusive planning. Predictive AI improves operational efficiency, forecasting accuracy and data-driven policymaking, supporting SDG 9 (Industry, Innovation and Infrastructure) and SDG 13 (Climate Action) by promoting sustainable resource use and climate-resilient management. Hybrid AI integrates these strengths, addressing both social and operational aspects of smart city development and aligning with SDG 17 (Partnerships for the Goals) through cross-sector collaboration and shared governance. Collectively, these models contribute to broader sustainability goals, including SDGs 3, 7 and 12. Research limitations/implications This review acknowledges several key limitations. Reliance on Scopus and Web of Science may exclude regionally significant or domain-specific studies not indexed in these databases. The focus on English-language publications introduces potential language bias, possibly overlooking relevant research from non-English-speaking regions. 
Restricting the timeframe to 2018–2025 captures recent developments but may omit earlier foundational work or the most recent studies not yet indexed. Differences in research design, policy contexts and sample characteristics also affect comparability and limit generalizability. Future research should broaden data sources, include multilingual literature and adopt mixed-methods and longitudinal approaches to enhance contextual diversity and empirical robustness. Practical implications The findings provide practical guidance for policymakers, urban planners and technology developers to design AI governance systems that are transparent, accountable and aligned with the SDGs. Integrating generative and predictive AI can enhance operational efficiency, support participatory planning and promote responsible decision-making. The findings inform the development of adaptive policy frameworks that advance SDG 9 (Industry, Innovation and Infrastructure), SDG 11 (Sustainable Cities and Communities) and SDG 13 (Climate Action) through digital literacy initiatives, cross-sector collaboration and data-informed management. Strengthening these practices enables cities to translate AI’s potential into tangible contributions to inclusive and sustainable urban transformation. Social implications Integrating user readiness and digital literacy into AI adoption is essential for building inclusive and trustworthy smart cities. These efforts support SDG 4 (Quality Education), SDG 10 (Reduced Inequalities) and SDG 16 (Peace, Justice and Strong Institutions). Generative AI encourages citizen participation and collaborative planning, while predictive AI improves service accessibility and data-informed governance. Promoting ethical awareness and community engagement helps narrow digital divides and address bias. 
Collectively, these elements advance SDG 11 (Sustainable Cities and Communities) and SDG 17 (Partnerships for the Goals) by fostering socially responsive and transparent AI-driven urban development. Originality/value This review is among the first to integrate perspectives on user readiness and technology adoption with comparative insights into generative and predictive AI in smart cities. It advances understanding of how AI-driven urban innovation supports inclusivity, efficiency and sustainability, while outlining policy directions and a future research agenda for equitable and transparent AI governance.
- Research Article
- 10.34190/icair.5.1.4129
- Dec 4, 2025
- International Conference on AI Research
As generative artificial intelligence (AI) technologies—such as ChatGPT, DALL·E, and other large language and image models—become increasingly mainstream, they introduce new ethical, legal, and governance challenges that are particularly urgent in developing countries. Nigeria, Africa’s most populous nation and a regional technology hub, offers a compelling case study of how these technologies are being adopted in environments with minimal regulatory infrastructure and limited public awareness. This paper examines the ethical and societal implications of generative AI in Nigeria and interrogates the country's preparedness to manage these risks. Despite the creation of the National Centre for Artificial Intelligence and Robotics (NCAIR) in 2020 and the recent passage of legislation such as the Nigeria Data Protection Act (2023) and the Startup Act (2022), Nigeria lacks a unified national AI strategy, formal risk classification systems, or sector-specific ethical guidelines. These gaps are significant given the widespread, unregulated use of generative AI tools in education, politics, and digital commerce. In higher education, students increasingly rely on generative AI for assignments and projects, raising concerns about academic integrity in a system already strained by infrastructural deficits. Meanwhile, in the political domain, deepfake videos and AI-generated misinformation have circulated during election periods, threatening democratic stability in a media environment prone to disinformation and marked by weak content regulation. The paper compares Nigeria’s regulatory trajectory with global trends, particularly the European Union’s Artificial Intelligence Act and similar initiatives in Kenya, South Africa, and Rwanda. It highlights how Nigeria’s reactive approach to AI governance contrasts sharply with more proactive global models. 
Sectoral analysis reveals risks including digital labour displacement, cultural misrepresentation through foreign-trained models, algorithmic bias, and the erosion of public trust. Ultimately, the study calls attention to Nigeria’s urgent need for a comprehensive, context-sensitive AI ethics and governance framework. Through an analysis grounded in local realities and informed by global comparisons, the paper contributes to broader conversations about equitable, responsible AI adoption in the Global South.
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges that generative AI brings and that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73,000 times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). 
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were reinforced by the language emerging around ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, a word OpenAI founder Sam Altman insisted he did not associate with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, the hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writers Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors Guild (SAG) warned that members were being asked to agree to contracts stipulating that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In its statement, SAG made clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Book Chapter
- 10.1093/oxfordhb/9780197579329.013.56
- Apr 20, 2022
The received wisdom is that artificial intelligence (AI) is a competition between the U.S. and China. This chapter will examine how the European Union (EU) fits into that mix and what it can offer as a “third way” to govern AI. The chapter presents this by exploring the past, present, and future of AI governance in the EU. First, the chapter will explore and evidence the EU’s coherent and comprehensive approach to AI governance. In short, the EU ensures and encourages ethical, trustworthy and reliable technological development. This will cover a range of key documents and policy tools that lead to the most crucial effort of the EU to date: to regulate AI. Then, the chapter will map the EU’s drive towards digital sovereignty through the lens of regulation and infrastructure. This covers topics such as the trustworthiness of AI systems, cloud, compute, and foreign direct investment. Finally, the chapter concludes by offering several considerations to achieve good AI governance in the EU.
- Discussion
- 10.2147/jmdh.s541271
- Sep 1, 2025
- Journal of Multidisciplinary Healthcare
The application of generative artificial intelligence (AI) technology in the healthcare sector can significantly enhance the efficiency of China’s healthcare services. However, risks persist in terms of accuracy, transparency, data privacy, ethics, and bias. These risks are manifested in three key areas: first, the potential erosion of human agency; second, issues of fairness and justice; and third, questions of liability and responsibility. This study reviews and analyzes the legal and regulatory frameworks established in China for the application of generative AI in healthcare, as well as relevant academic literature. Our research findings indicate that while China is actively constructing an ethical and legal governance framework in this field, the regulatory system remains inadequate and faces numerous challenges. These challenges include lagging regulatory rules; an unclear legal status of AI in laws such as the Civil Code; immature standards and regulatory schemes for medical AI training data; and the lack of a coordinated regulatory mechanism among different government departments. In response, this study attempts to establish a governance framework for generative AI in the medical field in China from both legal and ethical perspectives, yielding relevant research findings. Given the latest developments in generative AI in China, it is necessary to address the challenges of its application in the medical field from both ethical and legal perspectives. This includes enhancing algorithm transparency, standardizing medical data management, and promoting AI legislation. As AI technology continues to evolve, more diverse technical models will emerge in the future. This study also proposes that to address potential risks associated with medical AI, efforts should be made to establish a global AI ethics review committee to promote the formation of internationally unified ethical and legal review mechanisms.
- Research Article
- 10.36690/2674-5216-2024-3-44-66
- Sep 30, 2024
- Public Administration and Law Review
The rise of artificial intelligence (AI) has fundamentally challenged traditional intellectual property (IP) frameworks, particularly in the European Union (EU), where regulatory efforts are aimed at balancing innovation with legal protections. AI’s ability to autonomously create, modify, and use IP raises complex questions about authorship, inventorship, ownership, and enforcement, which existing laws were not designed to handle. As EU countries attempt to adapt their legal systems to address these challenges, a comparative analysis of their regulatory acts is essential to understand how different member states are responding to the intersection of AI and IP protection. The aim of this article is to provide a comparative analysis of the regulatory frameworks governing IP protection in the context of AI across selected EU countries. By examining national legislation and harmonization efforts, the study seeks to identify common challenges, highlight divergent approaches, and offer insights into the evolving legal landscape of IP protection in the age of AI. The article employs a qualitative, comparative research methodology. It focuses on six EU countries—Germany, France, the Netherlands, Poland, Greece, and Romania—analyzing their IP laws concerning AI-related issues. The study reviews national regulations, EU directives, and case law to evaluate how each country addresses AI-generated IP in terms of ownership, authorship, patentability, trademark issues, and enforcement mechanisms. A thematic coding approach is used to identify key trends and divergences between member states. The analysis reveals that all EU countries maintain the requirement for human authorship and inventorship, which limits the legal recognition of fully autonomous AI-generated content. 
While countries like Germany, France, and the Netherlands have initiated discussions on potential legal reforms, others, such as Poland, Greece, and Romania, rely more heavily on existing frameworks and await further EU guidance. Additionally, enforcement mechanisms vary significantly, with more technologically advanced countries adopting AI-driven tools to monitor and enforce IP rights. As AI continues to evolve and play a larger role in creative and technical industries, the legal frameworks governing IP in the EU must adapt accordingly. Future regulatory efforts should focus on creating new categories for AI-generated works, investing in AI-powered enforcement tools, and ensuring greater harmonization across member states. By addressing these challenges proactively, the EU can strike a balance between fostering AI innovation and maintaining robust IP protections, positioning itself as a global leader in both technology and intellectual property rights.