Abstract

Introduction. The development and spread of robots, artificial intelligence systems, and complex automated information systems raise the problem of harm caused by their decisions and actions, and of legal liability for that harm.

Theoretical analysis. One of the main functions of legal liability is general and specific prevention. Applied to robots, this function requires that they be reprogrammed, retrained, or decommissioned. The possibility, forms, and conditions of their existence are therefore directly tied to the problem of legal liability of autonomous, and sometimes unpredictable, software and hardware systems. A systemic legal framework for ensuring safety and predictability in the creation and operation of robots can be built on a classifying standard, with each class linked to particular forms and models of liability.

Empirical analysis. The legal classification of robots and complex automated information systems rests on the threats of harm arising from their spontaneous actions and decisions, correlated with the forms of legal liability. The following threats can be identified: causing a person's death; unlawfully changing a subject's legal status; causing material harm; violating a person's personal non-property rights; harming the information or other property of the owner (user) without causing harm to third parties; and unlawful behavior by robots.

Results. The authors propose a classification of robots and complex automated systems, approaches to legal liability and safety for each class, and directions for developing the legal and technical standards needed to support this classification and the associated certification.

