Abstract

Recently, 3D deep neural networks have developed rapidly and been applied to many safety-critical tasks. However, because deep learning models are difficult to interpret, adversarial examples can easily cause a normally trained model to make wrong predictions. In this paper, we propose a new point cloud defense network named DDR-Defense, a framework for defending neural network classifiers against adversarial examples. DDR-Defense modifies neither the number of points in the input samples nor the protected classifier, so it can protect most classification models. DDR-Defense first distinguishes adversarial examples from normal examples with a reconstruction-based detector. The detector prevents errors that would arise from processing every input sample indiscriminately, thereby improving the security of the defense network. For adversarial examples, we first apply statistical outlier removal (SOR) for denoising and then use a reformer to rebuild them. We design a new reformer, named Folding-VAE, based on FoldingNet and a variational autoencoder. We evaluate DDR-Defense on the ModelNet40 dataset and find that it defends better than existing 3D defense networks, especially against the saliency map attack and the LG-GAN attack. The lightweight detector, denoiser, and reformer framework ensures the security and efficiency of 3D defense in most application scenarios. Our research provides a basis for improving the robustness of deep learning models on 3D point clouds.
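
As a concrete illustration of the denoising step mentioned above, the sketch below shows a minimal NumPy implementation of statistical outlier removal (SOR) on a point cloud: each point's mean distance to its k nearest neighbors is compared against a statistical threshold over the whole cloud. The neighborhood size k and the multiplier alpha here are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def sor_filter(points: np.ndarray, k: int = 10, alpha: float = 1.0) -> np.ndarray:
    """Drop points whose mean k-NN distance exceeds mean + alpha * std.

    points: (N, 3) array of xyz coordinates. k and alpha are illustrative defaults.
    """
    # Pairwise Euclidean distances (N x N); fine for small clouds,
    # a KD-tree would be preferable for large N.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbors, excluding the point itself.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    # Keep points whose neighborhood distance lies within the statistical threshold.
    threshold = mean_knn.mean() + alpha * mean_knn.std()
    return points[mean_knn <= threshold]

if __name__ == "__main__":
    cloud = np.random.rand(1024, 3).astype(np.float32)
    cloud[0] += 5.0  # inject an obvious outlier point
    cleaned = sor_filter(cloud)
    print(cloud.shape, "->", cleaned.shape)
```

In the full pipeline, the cleaned cloud would then be passed to the reformer for reconstruction; this snippet only covers the SOR stage.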
