Abstract

Deep-learning-based classifiers on 3D point cloud data have been shown to be vulnerable to adversarial examples, while a defense strategy named Statistical Outlier Removal (SOR), which discards outlier points in the point cloud, is widely and successfully adopted to defend against adversarial examples. In this paper, we propose a novel white-box attack method, the Joint Gradient Based Attack (JGBA), aiming to break the SOR defense. Specifically, we generate adversarial examples by optimizing an objective function containing both the original point cloud and its SOR-processed version, so as to push both of them towards the decision boundary of the classifier at the same time. Since the SOR defense introduces a non-differentiable optimization problem, we overcome this by introducing a linear approximation of the SOR defense, which lets us compute the joint gradient. Moreover, we impose constraints on the perturbation norm of each component point in the point cloud, rather than on the entire object, to further strengthen the attack against the SOR defense. Our JGBA method can be directly extended to the semi-white-box setting, where the values of the hyper-parameters in the SOR defense are unknown to the attacker. Extensive experiments validate that our JGBA method achieves the highest performance in breaking both the SOR defense and the DUP-Net defense (a recently proposed defense that takes SOR as its core procedure), compared with state-of-the-art attacks on four victim classifiers: PointNet, PointNet++(SSG), PointNet++(MSG), and DGCNN.
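For concreteness, the SOR defense the abstract refers to can be sketched as follows. This is a minimal NumPy illustration assuming the common kNN-distance formulation of SOR (as used in DUP-Net): each point's mean distance to its k nearest neighbours is computed, and points whose mean distance exceeds the global mean by more than alpha standard deviations are discarded. The parameter values `k=2` and `alpha=1.1` are illustrative, not taken from the paper.

```python
import numpy as np

def sor_filter(points, k=2, alpha=1.1):
    """Sketch of Statistical Outlier Removal (SOR) on an (N, 3) point cloud.

    Drops points whose mean distance to their k nearest neighbours
    exceeds the global mean of these distances plus alpha standard
    deviations. Parameter defaults are illustrative assumptions.
    """
    # Pairwise Euclidean distances, shape (N, N)
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    # Exclude each point's zero self-distance
    np.fill_diagonal(dists, np.inf)
    # Mean distance from each point to its k nearest neighbours
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    # Statistical threshold over the whole cloud
    threshold = knn_mean.mean() + alpha * knn_mean.std()
    return points[knn_mean <= threshold]
```

Because the set of surviving points depends discontinuously on the input coordinates, this filtering step is non-differentiable, which is exactly the obstacle the paper's linear approximation is designed to overcome.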
