Abstract
This paper presents an empirical analysis of the viability of large language models (LLMs), specifically ChatGPT, in simulating the common law system and facilitating its evolutionary processes. Drawing on the Theory of Rules Evolution, the common law is understood to generate efficient rules through natural selection driven by constant litigation. Nonetheless, this evolutionary mechanism faces several hindrances. The process of change is typically slow and incremental: courts often must wait for a case deemed ‘appropriate’ before they can change the law, leading to extended delays. Courts also frequently struggle to make efficient decisions because of limited information. Other factors that decelerate the creation of efficient rules include judicial bias, the unequal distribution of resources among litigating parties, and the diminishing presence of a competitive legal order. This study first assesses ChatGPT’s capability to embrace the essence of the common law system, namely the doctrine of stare decisis. We then assess its potential to overcome the hindrances to common law development and to promote efficient rules. Through a series of carefully designed hypothetical cases set in a virtual jurisdiction called the “Matrix Kingdom,” we observe that ChatGPT can mimic the functions of a common law court by citing, following, and distinguishing its own precedents, and it accomplishes this with significantly fewer resources and in less time. This implies that humans can introduce hypothetical legal situations, enabling LLMs to replicate the natural-selection process observed in the common law system at a significantly accelerated pace. Because LLMs are trained on diverse information sources, not merely the factual contexts of cases, they could ease the informational constraints on decision-making. As such, LLMs might contribute significantly to the evolutionary processes of common law development. However, certain limitations, such as the potential for AI hallucination and inherent biases in LLMs, require careful consideration and management.