Abstract

The objective of this paper is to define models of responsibility for intelligent systems in situations where they cause harm (in the form of any wrongdoing, including crimes). To this end, the paper examines the current state of artificial intelligence technologies from the standpoint of moral, volitional, and intellectual autonomy in order to model approaches to their legal personality. Such autonomy can be expressed only through the software element of a technological system; even in the case of robots (cyber-physical systems), legal assessment therefore requires an analysis of how the system processes incoming information rather than of its physical characteristics. The author analyzes approaches under which intelligent systems are compared, in the scope and nature of their legal capacity, with legal entities, individuals, animals, and meta-directional structures. The conclusion is that artificial intelligence systems require an independent legal assessment beyond comparison with existing legal categories. The need to train a system on a limited dataset (that is, without additional training in a real environment), adversarial attacks, and internal errors of intelligent systems are considered examples of technical limitations that, at present, preclude raising the question of such systems' legal subjectivity. The author highlights that, in order to determine responsibility for harm caused by an intelligent system, it is necessary to establish the circle of persons among whom it is distributed: the intelligent system itself, its developer, and the operator (user). On this basis, the author defines 10 models of responsibility distribution among them.
