Abstract

Deep learning models have been shown to be vulnerable to adversarial attacks, even under imperceptible perturbations, which undermines the reliability of existing deep neural network-based autonomous driving systems. At the same time, deep 3D models are widely deployed in Cyber-Physical Systems (CPSs) with safety-critical requirements, particularly autonomous driving. In this paper, we investigate the robustness of deep 3D object detection models under adversarial point cloud perturbations. We develop a novel method to generate 3D adversarial examples from point cloud perturbations, which occur naturally due to the intrinsic characteristics of data captured by 3D sensors such as LiDAR. The generation of adversarial samples is supervised by a dual loss consisting of an adversarial loss and a perturbation loss: the adversarial loss drives the produced point cloud to be aggressive against the detector, while the perturbation loss constrains it to remain visually imperceptible. We demonstrate that the method successfully attacks 3D object detection models in most cases and exposes their vulnerability to physical-world attacks in the form of point cloud perturbations. We perform a thorough evaluation of popular deep 3D object detectors in an adversarial setting on the KITTI vision benchmark. Experimental results show that current deep 3D object detection models are susceptible to adversarial attacks in the context of autonomous driving, and that their performance degrades by a large margin in the presence of adversarial point clouds generated by the proposed method.
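
To make the dual-loss idea concrete, the following is a minimal sketch in PyTorch, not the paper's actual implementation: it optimizes an additive perturbation of a point cloud with an adversarial term that lowers a detector's confidence and a perturbation term that keeps the change small. The callable detector, the choice of L2 norm for the perturbation loss, and all hyperparameters are assumptions for illustration.

    import torch

    def generate_adversarial_point_cloud(detector, points, steps=100, lr=0.01, lam=1.0):
        # points: (N, 3) tensor of LiDAR points for one object/scene.
        # detector: hypothetical stand-in returning per-object confidence scores.
        delta = torch.zeros_like(points, requires_grad=True)  # learnable perturbation
        optimizer = torch.optim.Adam([delta], lr=lr)

        for _ in range(steps):
            perturbed = points + delta
            # Adversarial loss: push the detector's confidence down so the
            # perturbed point cloud becomes aggressive against the model.
            adv_loss = detector(perturbed).mean()
            # Perturbation loss: keep the perturbation small so it stays
            # visually imperceptible (L2 norm is one common choice).
            pert_loss = delta.norm(p=2)
            loss = adv_loss + lam * pert_loss  # dual loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return (points + delta).detach()

The weight lam trades off attack strength against imperceptibility; larger values keep the perturbed cloud closer to the original at the cost of a weaker attack.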
