Abstract

Artificial intelligence technologies are among the “pervasive digital technologies” that shape the current landscape of society’s technological development. From a legal standpoint, the key problem in applying artificial intelligence technologies is the limits of making legally significant decisions based on artificial intelligence. Although on the surface this issue may seem quite new, it has in fact been investigated in legal science and interdisciplinary research for decades. These limits can be divided into paradigmatic, axiological and pragmatic ones. Paradigmatic limits are associated with fundamental notions of agency in law, and they underpin the principle of human participation in decisions made with the help of artificial intelligence. Axiological limits presuppose the consistent realization of certain formalized values with respect to new social relations; the most developed system of axiological limits follows from the European concept of human rights. Pragmatic limits stem from the peculiarities of both law itself and the technological architecture of artificial intelligence in a social context. In this sense, pragmatic limits include the problem of the “open texture” of legal language and the “semantic limits” of law. The principle of nondiscrimination of legal subjects in machine learning can also be considered a pragmatic limit. An analysis of the modern regulatory landscape at the level of regulatory practices and self-regulation, focusing on representative international experience, makes it possible to verify the hypothesis of paradigmatic, axiological and pragmatic limits and to propose their use in the further development of legal regulation of decision-making involving artificial intelligence systems.
