Owing to the inherent characteristics of deep learning models, prior researchers have focused on adversarial strategies aimed at uncovering their vulnerabilities. However, these strategies have primarily been confined to the realm of image processing. For 3D vision structures such as point clouds and meshes, the robustness of learning models also diminishes significantly in the presence of discernible outliers, and existing strategies often fail to generalize across different DNN architectures. In this paper, a simple adversarial perturbation and optimization attack framework, the Tangent Plane Perturbation and Synthesizing Optimization Attack (TPSOA), is presented. TPSOA provides a projection direction for point clouds, subtly inducing imperceptible perturbations on their tangent planes. Furthermore, acknowledging the geometric form of the adversarial generation and its potential for 3D printing in physical scenarios, we reconstruct the adversarial mesh alongside the adversarial point cloud. A synthesized loss, incorporating both the Sobolev loss and the Chamfer loss, is used to optimize the distances between the adversarial generation and the ground truth. Through a combination of visualization and quantitative analysis experiments, we validate the imperceptibility of TPSOA, the necessity of its loss terms, and the magnitude of its perturbations. These experiments demonstrate that TPSOA enables targeted point cloud attacks that are imperceptible and of minor magnitude. In real-world adversarial attack and defense scenarios, as evidenced by both white-box and black-box experiments, TPSOA surpasses other state-of-the-art adversarial attack models, effectively manipulating a greater number of victim models with high efficiency while preserving the original structural integrity.
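The two core ideas summarized above, i.e. constraining perturbations to each point's tangent plane and measuring the adversarial cloud against the ground truth with a Chamfer-style term, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `project_to_tangent_plane` and `chamfer_distance` are hypothetical helper names, the toy cloud and its normals are fabricated for the example, and the Sobolev term of the synthesized loss is omitted.

```python
import numpy as np

def project_to_tangent_plane(perturbation, normals):
    """Remove the component along each unit normal so the perturbation
    lies in the point's tangent plane (hypothetical helper)."""
    normal_component = np.sum(perturbation * normals, axis=1, keepdims=True) * normals
    return perturbation - normal_component

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N,3) and q (M,3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy example: points on a sphere, whose normals are the radial directions.
rng = np.random.default_rng(0)
points = rng.standard_normal((64, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)
normals = points.copy()  # on the unit sphere, normal == position

# Constrain a small random perturbation to the tangent planes.
delta = project_to_tangent_plane(0.01 * rng.standard_normal((64, 3)), normals)
adv_points = points + delta
```

Because the normal component is subtracted out, each residual perturbation is orthogonal to its surface normal, which is what keeps the displacement "within" the local surface and hence hard to perceive; the Chamfer term then penalizes any drift of the adversarial cloud away from the original geometry.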