Abstract

Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with "black box" AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) has emerged to address this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing as AI techniques develop. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of an explainable artificial intelligence technique to understand some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reasons for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of driver emotions and driver distractions. The results obtained are promising and show the capacity of explainable AI techniques in the different tasks of the proposed environments.

Highlights

  • In the last few years, Artificial Intelligence (AI) computational methods, such as neural networks or knowledge-based systems, have been increasingly applied to different fields with generally excellent results

  • We focus on the context of Advanced Driver Assistance Systems (ADASs), which are electronic systems that are designed to support drivers in their driving task

  • We used the XRAI implementation provided in the Saliency package, version 0.0.5 (a minimal usage sketch follows this list)
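
As an illustration, the following sketch shows how XRAI attributions can be obtained with the Saliency package. It uses the package's current framework-agnostic API (the 0.0.5 release used a TF1-style interface that differs slightly); the classifier, input shape, and target class below are illustrative assumptions, not the models or data used in the paper.

    import numpy as np
    import tensorflow as tf
    import saliency.core as saliency

    # Placeholder classifier: MobileNetV2 is an assumption for this sketch,
    # not the network used in the paper.
    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    def call_model_function(images, call_model_args=None, expected_keys=None):
        # XRAI requests the gradients of the target-class score with respect
        # to the input pixels.
        target_class = call_model_args["class_idx"]
        images = tf.convert_to_tensor(images)
        with tf.GradientTape() as tape:
            tape.watch(images)
            scores = model(images)[:, target_class]
        gradients = tape.gradient(scores, images)
        return {saliency.INPUT_OUTPUT_GRADIENTS: gradients.numpy()}

    # Stand-in for a preprocessed driver-camera frame.
    image = np.random.rand(224, 224, 3).astype(np.float32)

    xrai = saliency.XRAI()
    mask = xrai.GetMask(image, call_model_function,
                        call_model_args={"class_idx": 0}, batch_size=20)
    # `mask` is a 2-D attribution map: higher values mark the image regions
    # (e.g., the driver's face or hands) that most influenced the prediction.

XRAI is region-based: it over-segments the image and ranks segments by their integrated-gradients attribution, which tends to yield more coherent, human-readable regions than pixel-level saliency maps.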



Introduction

In the last few years, Artificial Intelligence (AI) computational methods, such as neural networks or knowledge-based systems, have been increasingly applied to different fields with generally excellent results. The latest techniques brought by sub-symbolism, such as ensembles or Deep Neural Networks, are "black box" techniques whose outputs are difficult to explain. In this sense, the interpretability and explainability of these methods are currently key factors that need to be taken into consideration in the development of AI systems. Explainability goes a step further than interpretability by finding a human-comprehensible way to understand the decisions made by the algorithm. It is also remarkable that, in order to fully gauge the potential of AI, systems of trust are needed [3]. Since 94% of traffic accidents are caused by human error [10], research in all areas related to ADAS development is essential.

