Abstract

Automated decision-making by AI algorithms is increasingly likely to give rise to civil liability. However, AI algorithms based on machine learning techniques are difficult to explain, owing to technical inscrutability arising from the nature of the learning methods themselves, legal opacity resulting from the protection of trade secrets or intellectual property rights, and the incomprehensibility of complex and counterintuitive algorithms to the general public and to judges, all of which make judicial review difficult. When the mechanism of an AI algorithm is at issue in a lawsuit, the question is how an opaque AI algorithm can be evaluated by experts and reviewed by a court. This article first introduces the ACCC v. Trivago decision handed down by the Federal Court of Australia in 2020, focusing on how the experts appointed by each party presented their opinions on the AI algorithm and how the court drew its conclusions from those opinions. It then examines the issues, and possible solutions, that may arise when opaque AI algorithms are examined in Korean civil litigation procedure, drawing comparisons with the Trivago decision. It explains the basic features of Explainable AI (XAI) methods, including ante hoc and post hoc methods. It then points out problems in the Civil Procedure Act and the intellectual property laws of South Korea concerning the disclosure of data necessary for experts' analysis of AI algorithms and the protection of trade secrets contained in the produced data, and suggests how those problems may be solved. Lastly, it recommends a more active use of party-appointed experts in the judicial review of AI algorithms, allowing their active participation throughout the litigation procedure in order to clarify issues and deepen the court's scientific understanding.
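
For readers unfamiliar with the distinction the abstract draws, a post hoc XAI method explains a trained model from the outside, without relying on the model having been designed to be interpretable. The following is a minimal illustrative sketch, not drawn from the article, of one such method (permutation feature importance), assuming scikit-learn and using a placeholder dataset and model:

```python
# Minimal sketch of a post hoc XAI method: permutation feature importance.
# The dataset and model are placeholders for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black box") model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post hoc explanation: measure how much randomly shuffling each feature
# degrades held-out accuracy, without inspecting the model's internals.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1], reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

An ante hoc method, by contrast, builds interpretability into the model itself, for example by using an inherently transparent model class such as a decision tree or a linear model.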
