Abstract

The reliability of existing Cyber-Physical Systems (CPSs) built on deep neural networks is not guaranteed, because deep models have been proven vulnerable to adversarial attacks with imperceptible perturbations. Since deep 3D models have potential applications in a wide variety of safety-critical CPSs, such as autonomous driving systems, this paper explores the robustness of deep 3D object detection models under adversarial point cloud perturbations. We develop a novel method for generating 3D adversarial examples from point cloud perturbations, which are common in practice due to the inherent characteristics of the LiDAR data produced by 3D sensors. The method successfully attacks deep 3D models in most cases, raising a serious real-world concern that object detection models are vulnerable to physical-world attacks in the form of point cloud perturbations. We conduct a thorough robustness evaluation of popular deep 3D object detectors in an adversarial setting on the KITTI dataset. Experimental results show that current deep 3D object detection models are susceptible to adversarial attacks, and that their performance degrades substantially in the presence of adversarial point clouds generated by the proposed method.
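
The abstract does not specify the attack formulation. For intuition only, the sketch below shows a generic iterative gradient-based (PGD-style) point perturbation attack against a differentiable 3D detector; the `model`, `loss_fn`, and budget values (`eps`, `alpha`, `steps`) are illustrative assumptions, not the authors' method.

```python
import torch

def pgd_point_perturbation(model, points, labels, loss_fn,
                           eps=0.05, alpha=0.01, steps=10):
    """Illustrative PGD-style attack: shift each LiDAR point within an
    L-infinity ball of radius `eps` (meters) so as to maximize the
    detector's loss. A generic recipe, not the paper's actual method.

    points: (N, 3) float tensor of raw point coordinates.
    labels: ground-truth targets in whatever format `loss_fn` expects.
    """
    adv = points.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)        # detection loss to maximize
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                  # gradient-ascent step
            adv = points + (adv - points).clamp(-eps, eps)   # project back to budget
        adv = adv.detach()
    return adv
```

A small per-point budget keeps the perturbation physically plausible (comparable to ordinary LiDAR measurement noise), which is what makes such attacks a real-world concern rather than a purely digital one.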
