Abstract
Research on LiDAR-based 3D object detection and its robustness to adversarial attacks has made substantial progress. However, existing attack methods mainly perturb the targeted object itself, which destroys the object's integrity and makes the attack easy to perceive. In this work, we propose a novel adversarial attack against deep 3D object detection models, the contextual attribution maps-guided attack (CAMGA). Contextual attribution maps are generated from combinations of subregions in the context area and their impact on the prediction results. An attribution map exposes the influence of each subregion in the context area on the detection results and thereby narrows the scope of the adversarial attack. Perturbations are then generated under the guidance of a dual loss, proposed to simultaneously suppress the detection results and maintain visual imperceptibility. Experimental results show that CAMGA achieves an attack success rate of over 68% on three large-scale datasets and 83% on the KITTI dataset. Moreover, CAMGA attains a transfer attack success rate of at least 50% against all four victim detectors, as they all rely heavily on contextual information.
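As a minimal sketch of how such a dual objective might be instantiated (the symbols and the specific distance term below are our own illustrative notation, not the paper's formulation), one could write

$$\mathcal{L}_{\text{dual}} = \mathcal{L}_{\text{suppress}} + \lambda\,\mathcal{L}_{\text{percep}},$$

where $\mathcal{L}_{\text{suppress}}$ penalizes the detector's confidence on the attacked objects, $\mathcal{L}_{\text{percep}}$ measures the magnitude of the point-cloud perturbation (for example, a Chamfer distance between the clean and perturbed points, restricted to the context subregions selected by the attribution map), and $\lambda$ trades off attack strength against imperceptibility.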