The Principle of Transparency in the Context of Legal Regulation of Artificial Intelligence Systems
Ukraine continues to develop its own approach to the legal regulation of artificial intelligence, with due regard to the international standards developed to date. One of the objectives of the proposed legal regulation is to ensure it remains balanced and flexible so that it does not hinder further technological developments, while also protecting the rights of individuals who might be affected by AI systems, including data subject rights, insofar as AI systems involve certain personal data processing operations. For these purposes, a theoretical and legal discussion regarding the main aspects of AI regulation at the European Union level — currently the leading authority in this field — is important. Its risk-based regulatory framework and existing transparency requirements should be properly considered in Ukraine, especially in light of Ukraine’s commitment to implement the EU acquis in accordance with the EU-Ukraine Association Agreement. The discourse on AI regulation in the EU and beyond demonstrates that AI technologies — their development and subsequent use — must be transparent, with transparency often construed broadly. Notably, transparency itself is an important safeguard against AI-related abuses and a prerequisite for implementing other key principles of AI regulation, particularly accountability and responsibility on the part of AI system developers and deployers. At the same time, transparency is a complex requirement, and its application requires consideration of the peculiarities of AI technology itself and existing legislative limitations, including those relating to personal data protection and the protection of intellectual property rights. The article explores the prerequisites for the emergence of the transparency requirement for AI systems, confirms the importance of transparency in the context of AI regulation, and examines the theoretical and legal aspects of this principle and its transformation into regulatory requirements at the EU level.
- Research Article
- 10.24144/2307-3322.2023.80.1.63
- Jan 22, 2024
- Uzhhorod National University Herald. Series: Law
In the preamble to the Constitution, Ukraine proclaimed its course towards European integration, which, in particular, includes cooperation in the field of information human rights protection — the protection of personal data on the basis of approximating national legislation to the strictest EU regulations. The article analyzes the concept of information human rights in the context of the rule of law, especially in the period of digital transformation. The main focus is on the actual enforcement of citizens’ rights and freedoms, including information rights. The article analyzes the state of research in the field of information law, in particular the concepts of “information rights”, “rights to information”, and “information freedoms”, as well as approaches to understanding information rights from the standpoint of various branches of law. The authors conclude that while the content of these categories has been studied, the rights of data subjects themselves have not. The article examines the content of human information rights as a component of a person’s legal status in the State, using the example of the legal regulation of the protection of personal data subjects’ rights in Ukrainian legislation and in the legal provisions of the European Union. A comparison of the legal approaches regulating common ideas through similar lexical constructions reveals differences in the “spirit” of the regulation: the European approach is anthropocentric, human interests are the priority of regulation, and the ground for exercising many data subject rights is the subject’s will, unlike national legislation, where the ground for the emergence and exercise of some rights of personal data subjects is the fact that other rights of these subjects have been violated.
The article contains an analysis of the existing categories in the field of information rights, and provides a detailed analysis of the content of the elements of the concept of “personal data”, in particular, the concepts of: “human identification”, “identifiers in real life” and “identifiers in digital space”. The study suggests that the transformation of Ukrainian legislation in the field of personal data protection is still ongoing, which opens the way for further research in this area.
- Research Article
- 10.47475/2311-696x-2025-45-2-165-170
- Jul 7, 2025
- LEGAL ORDER: History, Theory, Practice
This article analyzes the results of the discussion on the issues of ethical, legal and criminal regulation of artificial intelligence, which took place within the framework of the XIII St. Petersburg International Legal Forum. Special attention is paid to the key conclusions of the panel discussion “Ethics of Artificial Intelligence”, which demonstrated the need for an integrated approach to regulating AI technologies. The research covers the fundamental ethical principles of the development and application of artificial intelligence, the problems of the distribution of responsibility between the participants of the AI system, as well as the formation of new types of crimes in the digital age. The authors analyze the expert consensus on the need to harmonize approaches to the ethical and legal regulation of artificial intelligence while preserving the cultural specifics of national legal systems, and consider the prospects for the development of compensation mechanisms and liability insurance in the field of AI systems.
- Research Article
- 10.46398/cuestpol.4072.40
- Mar 7, 2022
- Cuestiones Políticas
The aim of the article is to study various approaches to the legal regulation of artificial intelligence (AI) and robotic systems in the European Union, USA, and China. These regions are the world's largest centers of technological development, and each of them has therefore developed a unique approach to legal regulation of the limits, scope, and proper uses of AI. Their approaches are widely adopted by other countries. The authors used methods of analysis of scientific documents, laws, and legal regulations. In addition, this article reviews the basic conceptual approaches available in the world for shaping legal regulation in the field of AI and robotic systems. It is concluded that policies regulating artificial intelligence are not limited to one area and, in general, are intended to protect the rights and freedoms of citizens, regardless of the field in which AI is applied in the social order.
- Book Chapter
- 10.1007/978-3-031-18275-4_2
- Nov 23, 2022
This chapter gives an AI overview, starting by listing and comparing different terms and definitions in Sect. 2.1. It shows how important it is to clarify terms beforehand, as there are not always unique definitions and a shared human understanding of AI. It highlights that an AI system is not necessarily a robot or physical machine: the software can be operated on different kinds of hardware systems or platforms and does not require any physical shape. Section 2.2 gives an overview of different AI technology aspects and shows the differences in data computation and progress between humans and computer systems. Section 2.3 describes several AI technologies and highlights the challenge of generated bias: if the AI algorithm is trained with an already biased data set, whether that happens consciously or not, it will not generate a “better human.” The next Sect. 2.4 lists some AI challenges. A current main challenge is that humans still see AI systems as robots or machines and think of physical threats or opportunities. As AI systems can be purely software modules, it is recommended to shift the broad societal discussion from an attacking Terminator robot to a broader ethics and responsibility discussion. Every citizen should take responsibility and proactive action to shape “Friendly AI” systems. Like every technology, AI can also be abused and used in an unfriendly way. Section 2.5 describes opportunities and risks of AI technology. Section 2.6 highlights the scenario in which, as humans stop being goal driven, future AI systems might take over the definition of goals for humans. To generate broad awareness within society for developing and implementing AI systems in daily life, acceptance, transparency, and understanding are needed. Section 2.7 gives an overview of a human-based classification of evolution.
- Conference Article
- 10.1145/3462757.3466099
- Jun 21, 2021
The article examines whether current product liability law provides appropriate regulation for AI systems. This question, which is discussed using the example of the European Product Liability Directive, is of great practical importance in the current legal policy discussion on liability for AI systems. This article demonstrates that, in principle, the liability requirements are also applicable to AI systems. If the conduct of an AI system is carefully distinguished from its properties, excessive liability can be avoided. Reversing the burden of proof in favour of the injured party in the case of faulty behaviour enables a liability regime that is fair to the interests at stake. However, product liability law only applies if AI systems lead directly to personal injury or damage to property. Product liability law is not applicable insofar as AI systems indirectly lead to considerable disadvantages for the person concerned, in particular through assessments of persons. Protection against discrimination or otherwise unfair assessments by AI systems must be effected by other legal instruments.
- Research Article
- 10.1007/s00146-023-01661-w
- May 4, 2023
- AI & SOCIETY
Ethical AI does not have to be like finding a black cat in a dark room
- Research Article
- 10.1016/j.clsr.2023.105871
- Sep 12, 2023
- Computer Law & Security Review
The European AI liability directives – Critique of a half-hearted approach and lessons for the future
- Research Article
- 10.1007/s43681-023-00327-z
- Aug 30, 2023
- AI and Ethics
This paper presents an initial exploration of the concept of AI system recall, primarily understood as a last resort when AI systems violate ethical norms, societal expectations, or legal obligations. The discussion is spurred by recent incidents involving notable AI systems, demonstrating that AI recalls can be a very real necessity. This study delves into the concept of product recall as traditionally understood in industry and explores its potential application to AI systems. Our analysis of this concept is centered around two prominent categories of recall drivers in the AI domain: ethical-social and legal considerations. In terms of ethical-social drivers, we apply the innovative notion of a “moral Operational Design Domain”, suggesting AI systems should be recalled when they violate ethical principles and societal expectations. In addition, we also explore the recall of AI systems from a legal perspective, where the recently proposed AI Act provides regulatory measures for recalling AI systems that pose risks to health, safety, and fundamental rights. The paper also underscores the need for further research, especially around defining precise ethical and societal triggers for AI recalls, creating an efficient recall management framework for organizations, and reassessing the fit of traditional product recall models for AI systems within the AI Act's regulatory context. By probing these complex intersections between AI, ethics, and regulation, this work aims to contribute to the development of robust and responsible AI systems while maintaining readiness for failure scenarios.
- Research Article
- 10.22545/2024/00244
- Jan 9, 2024
- Transdisciplinary Journal of Engineering & Science
A critical look at the evolution of AI strongly suggests a sustained but stealthy race to replace humans with AI. Early scientific literature and discourse on AI, for various reasons (perhaps to allow AI to gain entry and acceptability in the mainstream scientific and technological arena), vehemently denied this "human replacement agenda". This thinking pattern has unknowingly shaped current scientific literature, discourse, and the general understanding of what AI is and of its development and applicability (a reductionist thinking). This limits our understanding of both the beneficial and destructive capabilities of AI. But when considering a TD assessment of the developmental dynamics of AI, one can comfortably say, and must be bold enough to admit, that AI indeed intends to replace humans and is on course to fulfil this.
 
 We see this AI human replacement agenda in intensified R&D efforts dedicated to developing powerful AI systems of systems which massively augment human reasoning, often far surpassing it. The inexhaustible list includes the AI replacement of formerly human-centric jobs, advanced autonomous weapon systems, killer robots and AI in warfare, intelligent facial recognition, biometric monitoring, the integration of AI into biological, nuclear and space-based weapons systems, etc. If this is the direction AI is taking, then a secondary aim would surely be to integrate epistemology into AI, or to "grant spirits" to AI systems. This is because a distinctive characteristic of a human is his spirit, and one cannot replace humans with AI without creating proportionate or appropriate spirits for the AI systems. Sooner or later our AI systems will have epistemological functions and possess spirits. The place of the soul for such AI systems will be attained as well. If human knowledge, beliefs, voices, clips and laws can be preserved long after people are gone, as is possible with smart digital technologies, then spirit-based AI would indeed allow these humans to live forever. If the feat of a spirit-endowed AI is near, then why worry? Indeed, considering that humans possess good and bad spirits (from the epistemology of rational and irrational inertia), these AI systems would of course have good or bad spirits and be bad or good AI.
 
 Would integrating a spirit into a rule-based or machine learning algorithmic structure of an AI system have benefits? Yes, profound benefits too. A spirit-based AI would make possible the "possession of feelings" by AI systems, a feat unattainable on the algorithmic, operational and inferential bases of AI systems today. This inability of AI systems to have feelings has remained a major setback to the acceptability (indeed trust) and utilization of AI. If we agree that spirit in AI is possible, then overlooking efforts aimed at making this possible, or allowing AI to attain this level unhindered (admitting the dangers of human involvement in AI), could pose a dangerous threat which can become highly destructive to mankind. This calls for critical supervision (TD-based ethical policing) and accompaniment of the evolution, development and applicability of AI, underscoring the need for human mediation in AI both as a major TD research subject and as an applicable function.
 
 A discussion will be made of my approach, which considers the synergy of critical systems heuristics (CSH) and systems engineering (Transdisciplinary Systems Engineering) to create "Transdisciplinary AI", which would formulate methods of integrating "human" epistemology into expert systems. Human epistemology is emphasized because, by the maturity of this future form of AI, there will be terms known as "AI or machine epistemology" or "AI or machine spirits". The investigation begins with creating expert systems (knowledge-based systems) with these functions, with plans to move into robotics and other machine learning arenas. Finally, moving the needle on what is considered permissible epistemology or a permissible spirit of/in AI is a critical component of the study of human mediation and AI which must be given critical attention. This will be discussed as well.
- Research Article
- 10.1609/aies.v7i1.31713
- Oct 16, 2024
- Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
What to Trust When We Trust Artificial Intelligence
So-called “trustworthy AI” has emerged as a guiding aim of industry leaders, computer and data science researchers, and policy makers in the US and Europe. Often, trustworthy AI is characterized in terms of a list of criteria. These lists usually include at least fairness, accountability, and transparency. Fairness, accountability, and transparency are valuable objectives, and they have begun to receive attention from philosophers and legal scholars. However, those who put forth criteria for trustworthy AI have failed to explain why satisfying the criteria makes an AI system—or the organizations that make use of the AI system—worthy of trust. Nor do they explain why the aim of trustworthy AI is important enough to justify devoting resources to achieve it. It even remains unclear whether an AI system is the sort of thing that can be trustworthy or not. To explain why fairness, accountability, and transparency are suitable criteria for trustworthy AI one needs an analysis of trustworthy AI. Providing an analysis of trustworthy AI is a distinct task from providing criteria. Criteria are diagnostic; they provide a useful test for the phenomenon of interest, but they do not purport to explain the nature of the phenomenon. It is conceivable that an AI system could lack transparency, accountability, or fairness while remaining trustworthy. An analysis of trustworthy AI provides the fundamental features of an AI system in virtue of which it is (or is not) worthy of trust. An AI system that lacks these features will, necessarily, fail to be worthy of trust. This paper puts forward an analysis of trustworthy AI that can be used to critically evaluate criteria for trustworthy AI such as fairness, accountability, and transparency. In this paper we first make clear the target concept to be analyzed: trustworthy AI. 
We argue that AI, at least in its current form, should be understood as a distributed, complex system embedded in a larger institutional context. This characterization of AI is consistent with recent definitions proposed by national and international regulatory bodies, and it eliminates some unhappy ambiguity in the common usage of the term. We further limit the scope of our discussion to AI systems which are used to inform decision-making about qualification problems, problems wherein a decision-maker must decide whether an individual is qualified for some beneficial or harmful treatment. We argue that, given reasonable assumptions about the nature of trust and trustworthiness, only AI systems that are used to inform decision-making about qualification problems are appropriate candidates for attributions of (un)trustworthiness. We then distinguish between two models of trust and trustworthiness that we find in the existing literature. We motivate our account by highlighting this as a dilemma in the accounts of trustworthy AI that have previously been offered. These accounts claim that trustworthiness is either exclusive to full agents (and it is thus nonsense when we talk of trustworthy AI), or they offer an account of trustworthiness that collapses into mere reliability. The first sort of account we refer to as an agential account and the second sort we refer to as a reliability account. We offer that one of the core challenges of putting forth an account of trustworthy AI is to avoid reducing to one of these two camps. It is thus a desideratum of our account that it avoids being exclusive to full moral agents, while it simultaneously avoids capturing things such as mere tools. We go on to propose our positive account which we submit avoids these twin pitfalls. We subsequently argue that if AI can be trustworthy, then it will be trustworthy on an institutional model. 
Starting from an account of institutional trust offered by Purves and Davis, we argue that trustworthy AI systems have three features: they are competent with regard to the task they are assigned, they are responsive to the morally salient facts governing the decision-making context in which they are deployed, and they publicly provide evidence of these features. As noted, this account builds on a model of institutional trust offered by Purves and Davis and an account of default trust from Margaret Urban Walker. The resulting account allows us to meet the core challenge of finding a balance between agential accounts and reliability accounts. We go on to refine our account, answer objections, and revisit the list of criteria from above, explained in terms of competence, responsiveness, and evidence.
- Research Article
- 10.46914/2959-4197-2025-1-2-31-44
- Jun 27, 2025
- Eurasian Scientific Journal of Law
The article is devoted to the analysis of the legal nature and regulation of artificial intelligence in the context of rapid digital development. The paper compares approaches to legal regulation of AI in Kazakhstan, the USA, the European Union and China, identifies their fundamental differences and points of intersection. Special attention is paid to the draft Digital Code of the Republic of Kazakhstan and the Concept of Artificial Intelligence Development for 2024–2029, which reflect an attempt to build a holistic legal model combining ethical norms, technical standards and mechanisms of legal accountability. The article reveals differences in regulatory philosophy reflected in national strategies, regulations, and ethical declarations. The conclusion is drawn about the need for flexible, adaptive and multi-layered legal regulation that can take into account both the technical characteristics of AI systems and the risks associated with their autonomy and impact on fundamental rights. The results of the study indicate the importance of moving from declarative norms to operational mechanisms, including the legal status of AI, certification of algorithms, ethical audit, transparency of decisions and allocation of responsibility.
- Research Article
- 10.7256/2454-0706.2025.3.73708
- Mar 1, 2025
- Право и политика
The subject of the research is modern technologies of generative artificial intelligence (GII), their impact on society and law (using the example of China). The rapid development of GII is associated with the growth of venture capital investments and active support from large technology companies and states. Since 2022, China has adopted a number of laws on the regulation of artificial intelligence. At the same time, the PRC focuses on the unconditional protection of state security and national interests. An important aspect of AI regulation in China is the effort to draft an AI bill that significantly expands the regulatory architecture. It is expected that the bill will be adopted during 2025, which will contribute to a more complete and detailed regulation of artificial intelligence. In the course of the research, the author used the following methods of cognition (research methodology): the dialectical method of cognition, general scientific empirical methods of cognition (comparison and description), general scientific theoretical methods of cognition (generalization and abstraction, induction and deduction, analogy), as well as specific scientific empirical methods of cognition (interpretation of legal norms) and specific scientific theoretical methods of cognition (legal-dogmatic). The main conclusions of the study are as follows. To date, the draft law on AI proposed by Chinese legal scholars is still under discussion, but it is already clear that it significantly complements and expands the already established architecture of legal regulation of artificial intelligence in the People's Republic of China. It contains many bold ideas (for example, about the legal protection of data obtained as a result of the work of the GII). It seems that during 2025, the specified draft law (apparently with improvements) will be adopted. 
Based on the regulatory legal acts on artificial intelligence (including generative AI) that have already entered into force and are currently in effect, as well as the trend towards the rapid formation of a basic law on AI, it clearly follows that China is pursuing legal regulation of this area for general use within the PRC, while allowing free use and study of AI for government purposes in order to protect national interests.
- Research Article
- 10.1109/mic.2021.3101919
- Sep 1, 2021
- IEEE Internet Computing
AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm a human. In addition, fairness, privacy, transparency, and explainability are vital to developing trust in AI systems. As stated in Describing Trustworthy AI (https://www.ibm.com/watson/trustworthy-ai), “Trust comes through understanding. How AI-led decisions are made and what determining factors were included are crucial to understand.” The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained; these include biases that the data might have, lack of data points in a particular region of the example space, fairness of gathering the data, feature importances, etc. However, besides these, it is critical to have human-centered explanations directly related to decision-making, similar to how a domain expert makes decisions based on “domain knowledge,” including well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classification, recommendations, predictions) in a way that leads to developing trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use. Contemporary XAI methods have yet to address explanations that enable decision-making similar to that of an expert. Figure 1 shows the stages of adoption of an AI system into the real world.
- Supplementary Content
- 10.3389/fpsyg.2022.836650
- Mar 4, 2022
- Frontiers in Psychology
The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this also means that there is co-responsibility for the consequences of the activities. Therefore, if each agent's responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision that that agent performs over the other agent's actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of the human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. 
The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of one common activity.
- Research Article
- 10.1016/j.infoandorg.2023.100498
- Dec 14, 2023
- Information and Organization
From worker empowerment to managerial control: The devolution of AI tools' intended positive implementation to their negative consequences
- Research Article
- 10.18523/2617-2607.2025.15.19-44
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.4-18
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.140-147
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.155-164
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.78-85
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.124-132
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.3
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.114-123
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.148-154
- May 25, 2025
- NaUKMA Research Papers. Law
- Research Article
- 10.18523/2617-2607.2025.15.133-139
- May 25, 2025
- NaUKMA Research Papers. Law