Abstract

Introduction: when studying legal issues related to safety and adequacy in the application of artificial intelligence systems (AIS), one cannot avoid the subject of liability accompanying the use of AIS. This paper focuses on the civil law aspects of liability for harm caused by artificial intelligence and robotic systems. Technological progress necessitates the revision of many legislative mechanisms so as to maintain and encourage the further development of innovative industries while ensuring safety in the application of artificial intelligence. It is essential not only to respond to the challenges of the moment but also to look forward and develop new rules based on short-term forecasts. Contrary to earlier belief, it can no longer be claimed categorically that the rules governing the institution of legal liability will require no fundamental changes. This is due to the growing autonomy of AIS and the expanding range of their possible applications: artificial intelligence is routinely employed in creative industries, in decision-making across various fields of human activity, in unmanned transportation, and elsewhere. However, major issues remain unresolved concerning the parties liable when harm is inflicted by AIS, the viability of applying no-fault liability mechanisms, and the appropriate level of regulation of such relations; the discussion of these issues is far from over.

Purpose: based on an analysis of theoretical concepts and legislation in Russia and other countries, to develop a vision of civil law regulation and tort liability for cases in which artificial intelligence is used.

Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods, namely the legal-dogmatic method and the method of interpretation of legal norms.

Results: there is considerable debate over the responsibilities of AIS owners and users. In many countries, codes of ethics for artificial intelligence have been adopted. However, legal regulation is required, for instance by treating an AIS as a source of increased danger; in the absence of relevant legal norms, it is reasonable to apply a tort liability mechanism by analogy of the law. Standardization in this area (of databases, software, infrastructure, etc.) is also important for identifying the AIS developers and operators to be held accountable; violation of standardization requirements may likewise serve as a ground for civil liability. Classic legal notions such as the subject of harm, the object of harm, and the party that has inflicted the harm acquire new dimensions with regard to both contractual and non-contractual liability.

Conclusions: the research has shown that the legislation of different countries currently provides only soft regulation of liability for harm caused by AIS. However, it is time to move gradually from developing strategies to taking practical steps toward creating effective mechanisms that minimize the risk of harm for which no person is held liable. Since the development of AIS involves many participants with independent legal status (data supplier, developer, manufacturer, programmer, designer, user), it is difficult to establish the liable party when something goes wrong, and many factors must be taken into account. As regards harm inflicted on third parties, it seems logical and reasonable to treat an AIS as a source of increased danger; in the absence of relevant legal regulations, it would be reasonable to apply a tort liability mechanism by analogy of the law. The model of contractual liability requires the development of common approaches to defining the product and the consequences of breaching the terms of the contract.
