Abstract

The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular regarding the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step toward the governance of AI; to others, they distract effort from genuine AI regulation. In this chapter, we engage in a critical discussion of the concept of trustworthy AI, probing it on both theoretical and practical grounds and assessing its substance and the feasibility of its intent. We offer a concise overview of the guidelines and their vision for trustworthy AI, and we examine the conceptual underpinnings of trustworthy AI by considering how the notions of 'trust' and 'trustworthiness' have been discussed in the philosophical literature. We then discuss several epistemic obstacles and moral requirements that arise when striving to achieve trustworthy AI in practice, before concluding with an argument in support of establishing a trustworthy AI culture that respects and protects foundational values.
