Abstract

Bart Verheij’s paper (this volume, p. 187) on argumentation support software (ASS) gives an excellent account of the past and present of ASS for legal reasoning, and offers some tantalizing glimpses of what the future may hold. In my reply, I focus on one particular aspect of his presentation: the use of ASS as a teaching tool, and in particular as a tool for teaching reasoning with facts and evidence. Generally speaking, there are good reasons to be sceptical when artificial intelligence (AI) systems are presented as teaching aids. The search for commercial-strength legal expert systems that autonomously perform the tasks of human experts has so far proved largely elusive. Two related issues in particular have been identified as recurrent problems. The first is robustness, i.e. the ability to deal with new scenarios not anticipated by the developers: systems are robust if they remain operational in circumstances for which they were not designed. In the context of criminal evidence, for instance, robustness would require adaptability to unforeseen crime scenarios. This is difficult to achieve because low-volume major crimes tend to be virtually unique. Each major crime scenario potentially consists of a unique set of circumstances, while many conventional AI techniques struggle with previously unseen problem settings. This leads to the second problem, the knowledge acquisition bottleneck. Reasoning about evidence in legal settings is knowledge intensive, requiring input from a broad range of scientific disciplines as well as formal representations of large chunks of everyday knowledge. In teaching environments, by contrast, the educator has control over the type of problems chosen, their complexity, and the relevant parameters and features.
This brings teaching applications seemingly closer to the ‘worked examples’ or prototypes that are so often the result of the research programmes by small teams of academics that dominate the AI and law field, including projects by the author of this reply. Verheij deserves considerable credit for resisting the temptation to see teaching applications merely as a simpler task for AI research. Of particular value is his emphasis on rigorous empirical evaluation of the effectiveness of his systems in a teaching environment, and the systematic way in which his past evaluations inform his theoretical analysis of the problem. This type of evidence-based approach to software-supported teaching in law has so far been missing. Indeed, with few exceptions such as Hall and Zeleznikow (2001), there has been little research into the empirical evaluation of legal AI in general. His conclusions are refreshingly honest too, identifying some potential problems in his own approach and indicating a whole range of possible extensions and even wholesale revisions. My observations and comments elaborate on these findings.
