Abstract

ChatGPT is a sophisticated large-language model able to answer high-level questions in a way that does not trigger conventional plagiarism detectors. Concerns have been raised that it and similar forms of ‘generative AI’ pose a significant threat to academic integrity in higher education. To evaluate this risk in the context of legal education specifically, this project had ChatGPT (using the GPT-3.5 model available in January 2023) generate answers to twenty-four different exams from an English-language law school based in a common law jurisdiction. It found that the system performed best on essay-based exams asking students to discuss international legal instruments or general legal principles not necessarily specific to any jurisdiction. It performed worst on exams featuring problem-style or “issue spotting” questions asking students to apply local legislation or jurisprudence to an invented factual scenario. While the project suggests that, for the time being, most conventional law school assessments are relatively immune from the threat posed by generative AI, it provides only a baseline snapshot of how large-language models tackle assessment in higher education. As the technology improves and students learn to harness it, fewer and fewer forms of assessment will be beyond its reach. Rather than attempt to block students from using AI as part of learning and assessment, however, this paper proposes three ways students may be taught to use it in appropriate and ethical ways. While it is clear that generative AI will change how universities teach and assess (across disciplines), a solution of prevention or denial is no solution at all.
