The introduction of statistical ‘legal tech’ raises questions about the future of law and legal practice. While technologies have always mediated the concept, practice, and texture of law, a qualitative and quantitative shift is now taking place: statistical legal tech is being integrated into mainstream legal practice, particularly that of litigators. These applications – above all in search and document generation – mediate how practicing lawyers interact with the legal system. By shaping how law is ‘done’, they ultimately come to shape what law is. Where such applications impinge on the creative elements of the litigator’s practice, for example via automation bias, they compromise the lawyer’s professional and ethical duty to respond appropriately to the unique circumstances of the client’s case – a duty that is central to the Rule of Law. The statistical mediation of legal resources by machine learning applications must therefore be introduced with great care if we are to avoid a subtle, inadvertent, but ultimately fundamental undermining of the Rule of Law. In this contribution we describe the normative effects of legal tech application design and assess their (in)compatibility with law and the Rule of Law as normative orders, with particular attention to legal texts, which we frame as the proper source of ‘lossless law’, uncompressed by statistical framing. We conclude that, given the inscrutability of such systems, reliance on the vigilance of individual lawyers is insufficient to guard against their potentially harmful effects, and we suggest that the onus is on the providers of legal technologies to demonstrate the legitimacy of their systems according to the normative standards inherent in the legal system.