Abstract

Automated vehicles (“AVs”) can greatly improve road safety and societal welfare, but legal systems have struggled with the question of whom to hold criminally liable for resulting harm, and how. This difficulty stems from the characteristics of the modern artificial intelligence (“AI”) used in AV technology. Singapore, France and the UK have pioneered legal models tailored to address criminal liability for AI misbehaviour. In this article, we analyse the three models comparatively, both to assess their individual merits and to draw lessons that can inform future legislative efforts. We first examine the roots of the problem by analysing the characteristics of modern AI vis-à-vis the basic legal foundations underlying criminal liability. We identify several problems, such as the epistemic problem, a lack of control, the issue of generic risk, and the problem of many hands, which undermine the building blocks of criminal negligence (awareness, foreseeability and risk-taking), a condition we refer to as negligence failures. We then analyse the three models on their ability to address these issues. We find diverging philosophies as to where the central weight of criminal liability should rest, but nevertheless identify common themes, such as the drawing of bright lines between liability and immunity and the introduction of novel vocabulary needed to navigate the new legal landscape sculpted by AI. We end with specific recommendations for future legislation, including the implementation of an AI training and licensing regime for users and the requirement that transition demands be empirically tested to ensure de facto control.
