Examples of algorithms being used in administrative practice are multiplying, yet the legal framework governing their use remains unsettled. For the legal evaluation of algorithmic decisions and for their subsequent regulation, it is important to distinguish between automated decision-making systems and machine learning systems. While the prospects for the use of algorithms by both courts and the executive branch seem obvious, such use must take into account possible risks and disadvantages. This is especially true for machine learning algorithms. First, replacing human decision-making with algorithms inevitably raises the problem of scoring, which has not only an ethical but also a legal dimension. Second, there is the problem of algorithmic error: a statistical decision, even one produced by a well-functioning algorithm, can be wrong. Statistics based on the calculus of probability, even when applied to a mass of cases, leads to probable rather than certain conclusions. It follows that the personalized decision of a machine learning algorithm cannot be final but must be “sanctioned” by a human; hence the inevitability of human control.