Abstract
INTRODUCTION. The use of artificial intelligence technologies (hereinafter referred to as "AI") is characterized by the mediation of human actions by autonomous processes. When technical expertise cannot identify the causer of harm, this leads to a "responsibility gap": an undesirable legal phenomenon in which responsibility for harm caused by the use of AI cannot be imposed on a specific person (or persons) under the rules of tort liability.

MATERIALS AND METHODS. The research used general scientific and special methods, including the historical method, methods of formal logic, analysis, and synthesis, as well as systemic and comparative legal methods.

RESEARCH RESULTS. To eliminate the "responsibility gap", the article proposes a mechanism for filling in the missing elements of a tort committed using AI when the error that led to harm cannot be attributed de lege lata to any participant in the life cycle of an AI system or application. The starting point for developing this mechanism was the theory of "guidance control" over the use of AI. A legal reading of the philosophical foundations of the "guidance control" theory substantiates a general legal principle for allocating responsibility for harm caused by AI: legal responsibility is borne by the person obliged to exercise human control over the use of the AI system or application, unless other perpetrators are identified. This principle is gradually gaining acceptance in international legal doctrine, as reflected in the recognition, in a number of international documents, of the need to control the use of AI.

CONCLUSIONS. If a protocol to the Treaty on the EAEU enshrines the general legal principle of the controlling person's responsibility for harm caused by AI, that principle can acquire the significance of a regional international legal principle and thereby become the basis for regulating, within the EAEU, the allocation of responsibility for harm caused by AI. The proposed toolkit lends itself to legal consolidation through supranational legal regulation.