Increasingly, the mass media report examples of predictive analytics systems being used to resolve legal disputes. From the viewpoint of legal regulation, however, a question arises: can a solution proposed by such a system be considered final and legally significant, or is it merely one of a possible set of solutions? The scientific literature analyzing the prospects of applying such systems draws a parallel with legal principles. Researchers arrive at disappointing predictions about the risks to human rights and freedoms if the solutions proposed by predictive systems are approved without human participation. In our study, we came to the following conclusions. First, at the current stage of technological development, intelligent systems cannot explain why they make particular decisions. Second, because the system's decision-making is not transparent, it is incorrect to assume that programmers or developers replace the judge: the role of the programmers and developers of an intelligent system model is important but purely technical. Third, the problem of inaccuracy in the system's decisions arises only at the stage of system training: the higher the quality of the datasets and the more of them there are, the more accurate the decisions made by this technology will be. That is why forming correct datasets is an independent and challenging technological task.