Abstract

Background: Trust in artificial intelligence (AI) is a prominent topic in EU politics. Because AI can be used as a lethal weapon, EU regulations are needed to establish a framework for its development and use. Companies need guidance to develop their AI-based products and services in an acceptable manner. This research should help AI and machine learning practitioners prepare for what is coming next and decide which aspects to focus on.

Objective: We aim to understand the role of "Responsible AI" from different perspectives: what constitutes the umbrella term "Responsible AI" and which terms define it. The research question is: "What are the aspects defining 'Responsible AI'?"

Method: A structured literature review (SLR) was used as the research method. We searched four databases for research results on "Responsible AI" published in the last two years; 118 research papers were ultimately included in our study.

Results: We found only three papers that attempt to define "Responsible AI". They use concepts such as Fairness, Privacy, Accountability, Transparency, Ethics, Security & Safety, Soundness, and Explainability. After studying all 118 analyzed papers, we strongly believe that the terms included in those three definitions are not sufficient: some are ambiguous, missing, or used as synonyms. We developed a four-dimensional representation of the similarities and differences among definitions of "Responsible AI". "Responsible AI" must be a human-centered approach, and the concept must include the implementation of AI methods that focus on ethics and explainability of models, as well as privacy, security, and trust.
