Abstract

“Trustworthy AI” is the European Commission's concept for facilitating the acceptance and diffusion of Artificial Intelligence in Europe. The concept holds that European AI applications shall be lawful, ethical and robust, from both a technical and a societal perspective. This contribution asks about the state of play in implementing the concept of Trustworthy AI. More concretely, it sets out to identify concrete cases of implementing Trustworthy AI in order to analyse approaches and experiences. However, it turns out that such projects currently exist only in a research context: with few exceptions, neither large companies nor start-ups or medium-sized companies provide suitable examples. This raises the question of why companies today ignore or even avoid the carefully worked-out guidelines for implementing Trustworthy AI. Three answers are given, referring to time-to-market considerations, the different mindsets of software engineers and social scientists, and the fact that implementing Trustworthy AI requires firms to go the extra mile with additional expertise and governance structures. Following this, two possibilities are presented for increasing the number of companies that actually pick up the guidelines and concretely implement Trustworthy AI: firstly, to break down existing implementation guidelines into requirements for software engineers, computer scientists and managers; and secondly, to embed social scientists and stakeholders in the implementation process.