Abstract

How should liability rules apply when hardware, software, and data come from different sources, and when humans are involved simultaneously or in sequence? What role do our laws play in situations involving AI? Most essentially, how can laws help us manage risks, prevent harms, and hold the right parties responsible for AI-caused or AI-facilitated harms when they occur? This paper aspires to provide some answers to these questions. It supports a multifaceted approach to liability for AI-facilitated wrongs, which would require identifying different risk pools and applying rules of collective liability together with rules enforcing or incentivizing the (re)allocation of (individual) liability within separate risk pools. The paper surveys and assesses different AI liability schemes against the backdrop of the human tendency to underestimate the role of humans and other non-technological factors when systems work well, and to overestimate that role when systems fail. It also returns to Calabresi and Melamed's classic theory and discusses the role that liability rules will play in the context of AI-facilitated wrongs and in the allocation of legal entitlements and liability costs, ideally to the parties that can avoid them most cheaply.
