Abstract

Explainable Artificial Intelligence (XAI) methods reveal the internal representations of data hidden within a neural network's trained weights. That information, presented in a human-readable form, can be remarkably useful during model development and validation. Among others, gradient-based methods such as Grad-CAM are broadly used in the image processing domain. However, the autonomous vehicle sensor suite also includes devices such as radars and LiDARs, to which existing XAI methods do not apply directly. In this article, we present our approach to adapting Grad-CAM visualization to the LiDAR point-cloud object detection architectures used in automotive perception systems. We address data and network architecture compatibility problems and answer the question of whether Grad-CAM methods can be used efficiently with LiDAR sensor data. We showcase successful results of our method and the benefits of applying Grad-CAM XAI to a LiDAR sensor in the autonomous driving domain.

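To illustrate the general idea the abstract refers to, the sketch below shows standard Grad-CAM applied to a 2D bird's-eye-view (BEV) feature map of the kind produced by pillar- or voxel-based LiDAR detectors. The `BEVBackbone` module and the class index used here are hypothetical placeholders, not the architecture or classes evaluated in the article; this is a minimal sketch of the underlying Grad-CAM computation only.

```python
# Minimal Grad-CAM sketch on a BEV pseudo-image (hypothetical toy model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BEVBackbone(nn.Module):
    """Toy convolutional backbone over a BEV pseudo-image (stand-in model)."""

    def __init__(self, in_channels=64, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(128, num_classes, 1)  # per-cell class scores

    def forward(self, x):
        feats = self.features(x)    # target layer for Grad-CAM
        scores = self.head(feats)   # (B, C, H, W) detection scores
        return feats, scores


def grad_cam(feats, class_score):
    """Standard Grad-CAM: weight feature channels by their pooled gradients."""
    grads = torch.autograd.grad(class_score, feats, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # alpha_k^c
    cam = F.relu((weights * feats).sum(dim=1))       # ReLU(sum_k alpha_k A^k)
    return cam / (cam.max() + 1e-8)                  # normalise to [0, 1]


if __name__ == "__main__":
    model = BEVBackbone()
    bev = torch.randn(1, 64, 200, 200)               # fake BEV pseudo-image
    feats, scores = model(bev)
    target = scores[:, 0].max()                      # strongest response, class 0 (arbitrary)
    heatmap = grad_cam(feats, target)
    print(heatmap.shape)                             # (1, 200, 200) BEV attribution map
```

The resulting heatmap lives in the BEV grid, so it can be overlaid on the projected point cloud to inspect which regions drove a given detection.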