Abstract

AI will change many aspects of the world we live in, including the way corporations are governed. Many efficiencies and improvements are likely, but there are also potential dangers, including the threat of harmful impacts on third parties, discriminatory practices, data and privacy breaches, fraudulent practices and even ‘rogue AI’. To address these dangers, the EU published ‘The Expert Group’s Policy and Investment Recommendations for Trustworthy AI’ (the Guidelines). The Guidelines derive seven principles from their four foundational pillars of respect for human autonomy, prevention of harm, fairness, and explicability. If implemented by business, the impact on corporate governance will be substantial. Fundamental questions at the intersection of ethics and law are considered, but because the Guidelines only address the former without (much) reference to the latter, their practical application is challenging for business. Further, while they promote many positive corporate governance principles—including a stakeholder-oriented (‘human-centric’) corporate purpose and diversity, non-discrimination, and fairness—their general nature leaves many questions and concerns unanswered. In this paper we examine the potential significance and impact of the Guidelines on selected corporate law and governance issues. We conclude that more specificity is needed in relation to how the principles therein will harmonise with company law rules and governance principles. However, despite their imperfections, until harder legislative instruments emerge, the Guidelines provide a useful starting point for directing businesses towards establishing trustworthy AI.

Highlights

  • Artificial intelligence (AI) is becoming increasingly important for businesses

  • While the Guidelines promote many positive corporate governance principles—including a stakeholder-oriented (‘human-centric’) corporate purpose and diversity, non-discrimination, and fairness—their general nature leaves many questions and concerns unanswered

  • In this paper we examine the potential significance and impact of the Guidelines on selected corporate law and governance issues

Introduction

Artificial intelligence (AI) is becoming increasingly important for businesses. Most visible are the various AI-driven products and services—from self-driving cars to robotic trading of securities—that are either already in use or expected to emerge in the near future. In order to ‘anchor [...] more firmly in the development and use of AI’ the principles of a human-centric and ethics-by-design approach, the Commission in 2018 appointed an independent AI high-level expert group and tasked it with developing draft AI ethics guidelines.17 This group, the High-Level Expert Group on Artificial Intelligence (hereinafter the ‘Expert Group’), was given the mandate to draft two deliverables: (1) ethics guidelines on AI, and (2) policy and investment recommendations. While the Guidelines are applicable to AI systems in general and to a wide range of ‘AI practitioners’,21 this article will examine them from a company law and corporate governance perspective.

The Guidelines
Corporate Leadership and Oversight
Intra‐Corporate Oversight of AI
Involvement of Stakeholders
Diversity
Non‐Discrimination
Equal Access and Treatment
Corporate Purpose and Stakeholders
Moral Trade‐Offs
Stakeholder Trade‐Offs
Impact on Society
Technical Robustness and Safety
Unintentional Problems
Illegal Acts and ‘Rogue AI’
Privacy and Data Governance
Findings
Conclusion