A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery — generally referred to as “technology-assisted review” (TAR) — increasingly rely on “predictive coding”: machine-learning techniques that classify and predict which of the voluminous electronic documents subject to litigation should be withheld from or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships — and the ways in which these systems are shaping the construction and presentation of knowledge — lawyers are grappling with their professional obligations, their ethical duties, and what these systems mean for the future of legal practice.

Through in-depth, semi-structured interviews with experts in this space — the technology-company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice — we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping, and being reshaped by, predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. In particular, they highlight concerns about a potential loss of professional agency and skill; limited understanding of the systems, which leads to both over- and under-reliance on them; and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logics and ethics.

Based on our findings, we conclude that predictive coding tools — and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice — challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals — judges, law firms, and lawyers — to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, of third-party vendors and their technical experts. This process for choosing the technical systems upon which lawyers rely to make professional decisions — e.g., whether documents are responsive or whether the standard of proportionality has been met — is no longer sufficient.
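To make the classification task described above concrete, the following is a purely illustrative sketch, not any vendor's actual implementation. It trains a minimal "responsiveness" classifier on a hypothetical attorney-labeled seed set using TF-IDF features and logistic regression; all document text, labels, and names are invented for illustration.

```python
# Illustrative sketch of the supervised-classification core of a
# predictive coding (TAR) workflow. Hypothetical data and names;
# real systems add iterative active learning, validation sampling,
# and richer document features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Attorney-labeled seed set: 1 = responsive (produce), 0 = non-responsive (withhold).
seed_docs = [
    "Q3 merger negotiations with Acme Corp, term sheet attached",
    "Lunch order for the team offsite on Friday",
    "Draft due-diligence memo for the Acme acquisition",
    "Reminder: the parking garage closes at 10pm",
]
seed_labels = [1, 0, 1, 0]

# Convert documents to TF-IDF features and fit a linear classifier.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(seed_docs)
classifier = LogisticRegression()
classifier.fit(X, seed_labels)

# Score the unreviewed corpus; documents scoring above a chosen cutoff
# are queued for production, the rest for withholding or further review.
unreviewed = ["Acme deal terms, revised draft", "Office holiday party photos"]
scores = classifier.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in zip(unreviewed, scores):
    print(f"{score:.2f}  {doc}")
```

Even in this toy version, consequential professional judgments are embedded in technical choices: who labels the seed set, and where the score cutoff is drawn, determine what counts as responsive. These are precisely the kinds of embedded decisions that, we argue below, must remain salient and contestable by lawyers.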
Just as the tools of medicine are reviewed by appropriate experts before they are offered for consideration and adoption by medical professionals, so, we argue, must the legal profession develop new processes for determining which algorithmic tools are fit to support lawyers’ decision making. Relatedly, because predictive coding systems are used in producing lawyers’ professional judgments, we argue that they must be designed for contestability — providing greater transparency, interaction, and configurability around embedded choices — so that decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.