Explaining 3D Object Detection Through Shapley Value-Based Attribution Map

Abstract

Artificial Intelligence (AI)-based 3D object detection using point clouds from LiDAR sensors has become widespread in applications such as autonomous driving. However, the lack of transparency in AI decision-making can lead to inaccurate detections in unknown situations, posing potential safety risks. Although explainable AI (XAI) has recently gained attention as a means of elucidating the rationale behind AI inferences, most existing methods are designed for image-based tasks, and few specifically address point clouds and object detection. In this study, we propose 3D-SVAM, which explains 3D object detection through a Shapley Value-based Attribution Map. The Shapley value justifies these explanations by satisfying desirable properties for interpretability. Although computing exact Shapley values is computationally expensive, our method mitigates this cost by introducing a suitable approximation. Quantitative evaluations show that our method outperforms the state-of-the-art method. Moreover, we demonstrate applications of our method, such as analyzing detection robustness to changes in point cloud distribution and correcting false detections by identifying points that contribute negatively to the prediction. These findings clarify the properties of 3D object detection and broaden the practical application of XAI.
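The abstract does not specify the approximation the method introduces. As a rough orientation, the sketch below shows the standard Monte Carlo (permutation-sampling) estimator of Shapley values, applied here to groups of points scored by a detector confidence function. The `detection_score` stand-in, the grouping scheme, and all parameters are illustrative assumptions; this is not the paper's 3D-SVAM algorithm.

```python
# Minimal sketch: permutation-sampling Shapley attribution over point groups.
# All names here (detection_score, the group partition) are hypothetical
# placeholders, not the paper's method.
import numpy as np

def detection_score(points: np.ndarray) -> float:
    """Hypothetical stand-in for a detector's confidence on a point cloud."""
    if len(points) == 0:
        return 0.0
    # Placeholder proxy for "objectness": point density near the origin.
    return float(np.exp(-np.linalg.norm(points, axis=1)).mean())

def shapley_attribution(points: np.ndarray, groups: list[np.ndarray],
                        n_samples: int = 200, seed: int = 0) -> np.ndarray:
    """Estimate each group's Shapley value by sampling random orderings.

    phi_i ~= E_pi[ f(S_pi(i) + {i}) - f(S_pi(i)) ], where S_pi(i) is the
    set of groups preceding group i in a random permutation pi.
    """
    rng = np.random.default_rng(seed)
    n = len(groups)
    phi = np.zeros(n)
    for _ in range(n_samples):
        order = rng.permutation(n)
        included: list[int] = []  # point indices of groups added so far
        prev = detection_score(points[np.array(included, dtype=int)])
        for g in order:
            included.extend(groups[g])
            curr = detection_score(points[np.array(included, dtype=int)])
            phi[g] += curr - prev  # marginal contribution of group g
            prev = curr
    return phi / n_samples

# Toy usage: 30 random points split into 3 spatial groups.
pts = np.random.default_rng(1).normal(size=(30, 3))
grps = [np.arange(0, 10), np.arange(10, 20), np.arange(20, 30)]
print(shapley_attribution(pts, grps))
```

The estimator averages each group's marginal contribution over random insertion orders and converges to the exact Shapley value as the number of sampled permutations grows, trading accuracy for tractability in the same spirit as the approximation the abstract alludes to.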
