Abstract
Persuasive writing is an essential skill for a lawyer. Budding lawyers hone their persuasive writing skills during their studies, in part through essay-style examinations. However, Large Language Models (LLMs) have recently proved adept at a broad range of language tasks. They could undermine the utility of many existing forms of law school assessment by allowing students to generate essays artificially. This paper explores the extent to which those concerns are warranted. The study first examines the constituent elements of persuasive legal writing and reviews the available literature on LLM competence in each area. It then evaluates whether OpenAI’s powerful LLM, GPT-4, can produce essay-style answers to a post-graduate law school exam on legal theory. The GPT-4 output is compared to essays written by actual honors students, with all essays blind-graded by human graders using the subject’s examination rubric. The study finds that GPT-4 cannot match the honors students. While it can produce essays of a passable grade, it faces significant challenges in producing higher-quality content. The paper closes with observations about the experience, prompt engineering, LLM bias, and the technology’s implications for the legal profession.