Abstract

In the era of the Internet of Things and Big Data, data scientists are expected to extract valuable knowledge from the data at hand. They first analyze, curate, and pre-process the data; then, they apply Artificial Intelligence (AI) techniques to extract knowledge from it automatically. Indeed, AI has been identified as a strategic technology and is already part of our everyday lives. The European Commission states that the “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of eXplainable AI (XAI for short) in developing AI that is coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, in addition to the European General Data Protection Regulation (GDPR), a new European regulation on AI is in progress; it stresses once again the need for human-centric, responsible, explainable, and trustworthy AI that empowers citizens to make more informed and thus better decisions. In addition, as noted in the XAI challenge posed by the US Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, humankind requires a new generation of XAI systems that interact naturally with humans, providing comprehensible explanations of automatically made decisions.
