Abstract

This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies, in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate the effect in a technical case study in the context of spam classification.

Highlights

  • Machine learning is the most prominent and economically relevant instantiation of artificial intelligence techniques (Royal Society 2017)

  • With legal requirements under the General Data Protection Regulation (GDPR) being largely unclear at the moment, it seems fruitful to inquire into the role of explainability in other legal areas

  • As we aim to highlight, artificial intelligence and the law are interwoven and often mutually reinforcing: while the law does set certain limits on the use of artificial intelligence, it may also, often in the same context, require the use of explainable machine-learned models for economic agents to fulfill their duties of care

Summary

Case studies

Two case studies, one on medical diagnostics and malpractice and one on corporate valuation and the business judgment rule, analyze recent advances in ML prediction tools from a legal point of view. They go beyond the current discussion of the data protection requirements of explainability to show that explainability is a crucial, but overlooked, category for the assessment of contractual and tort liability concerning the use of AI tools. A further, technical case study on explainability proper implements an exemplary spam classification to discuss the trade-off between accuracy and explainability from both a technical and a legal perspective.
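The kind of explainable spam classifier discussed in the technical case study can be sketched with a word-level naive Bayes model, whose per-word log-likelihood ratios serve directly as explanations of each prediction. This is an illustrative sketch only, on a hypothetical toy corpus, not the paper's actual experiment or dataset:

```python
from collections import Counter
import math

# Hypothetical toy corpus (illustrative only, not the paper's data)
spam = ["win cash now", "free prize claim now", "cheap pills free"]
ham = ["meeting agenda attached", "lunch tomorrow at noon", "project report attached"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_odds(word, alpha=1.0):
    # Laplace-smoothed log-likelihood ratio: > 0 favours spam, < 0 favours ham
    p_spam = (spam_counts[word] + alpha) / (sum(spam_counts.values()) + alpha * len(vocab))
    p_ham = (ham_counts[word] + alpha) / (sum(ham_counts.values()) + alpha * len(vocab))
    return math.log(p_spam / p_ham)

def classify(message):
    # The score is a sum of per-word contributions; each contribution is a
    # human-readable explanation of why the message was labelled as it was.
    contributions = {w: log_odds(w) for w in message.split() if w in vocab}
    score = sum(contributions.values())
    return ("spam" if score > 0 else "ham"), contributions

label, why = classify("claim free cash now")
print(label, why)  # each word's log-odds explains its share of the verdict
```

The transparency comes at a cost: a linear bag-of-words model like this is typically less accurate than a complex learner such as a deep network, which is exactly the accuracy–explainability trade-off the case study examines.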

Medical diagnostics
The rise of ML diagnostics
Legal liability
Short summary of case study 1
Mergers and acquisitions
The rise of ML valuation tools
Legal liability: the business judgment rule
Short summary of case study 2
Key results from case studies 1 and 2: contractual explainability
Explanations and accuracy
Explanation type
ML model type
Example: automatic spam detection
Conclusion
Findings
Compliance with ethical standards