Abstract

Artificial intelligence systems present various types of risks, but they are generally not yet subject to comprehensive government regulation. Soft law measures such as private standards have been developed to fill this governance gap, but a major limitation of such standards is that they are not directly enforceable, so many companies may not comply with them. This article examines the thesis that compliance with soft law standards may provide some protection from product liability, giving companies an additional incentive to adopt and comply with such standards. An analysis of U.S. case law shows that compliance with private standards does not provide an absolute shield against liability, but it can serve as persuasive evidence of due care that diminishes liability risks in most U.S. state jurisdictions. Conversely, while failure to abide by soft law standards does not automatically result in liability, it can provide probative evidence that a company has not exercised due care and therefore should be held liable. Moreover, in many states, compliance with soft law standards can provide an effective defense against punitive damages if a company's AI system harms people or property and the company is held liable for compensatory damages. These findings suggest that protection against such liability provides a powerful incentive for companies to comply with soft law standards.
