AI does not embody human intellectual capabilities; rather, it aims to perform tasks that humans have traditionally performed using intelligence. It is difficult today to imagine an AI judge identical to a human judge, and only when the results produced by AI algorithms actually replace the judgment of human judges can such a system be called an AI judge. AI judges are discussed in the context of improving judicial efficiency and of expectations of greater consistency and fairness in trials, but they raise various constitutional issues. In particular, the right to a fair trial or the right to equality may be violated by bias in data and algorithms, and the right to a fair trial may also be violated if the process and basis of an AI algorithm's results remain unclear. These issues are difficult because views of fairness differ at a fundamental level, and because attempts to make AI more transparent may compromise its accuracy and performance. Given the nature of judicial work, today's weak AI is not capable of serving as a judge: it identifies correlation rather than causation, and it does not draw specific, valid conclusions in individual cases. It is also difficult to expect the public to accept the judgment of an AI that lacks judicial virtues. Nevertheless, if AI is to be used in a trial, it must be trained on sound data and must be able to explain its training data and basic working principles. AI may be utilized as an aid in a trial, but such a system cannot be called an AI judge. There may still be room to review flexibly whether AI judges can be introduced, depending on the nature of the case.