Abstract
How do we apply liability rules in cases in which hardware, software, and data come from different sources, and in which humans are involved simultaneously or in sequence? What roles do our laws play when AI is involved? Most essentially, how do laws help us manage risks, prevent harms, and hold the right individuals responsible for AI-caused or AI-facilitated harms when they occur? This paper aspires to provide some answers to these questions. It supports a multifaceted approach to liability for AI-facilitated wrongs, one that requires identifying different risk pools and applying rules of collective liability together with rules enforcing or incentivizing the (re)allocation of (individual) liability within separate risk pools. The paper surveys and assesses different AI liability schemes against the backdrop of the human tendency to underestimate the role of humans and other non-technological factors when systems work well, and to overestimate it when systems fail. It also returns to Calabresi and Melamed's classic theory and discusses the role that liability rules will play in the context of AI-facilitated wrongs and in the allocation of legal entitlements and liability costs, ideally to the parties that can most inexpensively avoid them.