Abstract

Neural network quantization techniques play an important role in efficiently deploying deep learning models on hardware with limited computing and storage resources. Many applications of this technology, such as autonomous driving, require not only efficiency but also robustness, and the robustness of quantized networks against adversarial attacks has become a major point of interest. In this work, we rethink the impact of quantization on adversarial attacks and explore the robustness limits of quantized neural networks. This study reveals that activation quantization can serve as a defense that weakens adversarial noise, but the robustness of quantized models is still limited by the amplification of network errors, including both quantization errors and adversarial noise. To address this problem, we propose the Quantization Adversarial Noise Suppression (QANS) method, which employs a Gaussian kernel regularization constraint to stabilize the model by restricting the perturbation error within two levels of tolerance. Extensive experiments are conducted with Wide ResNet and VGG-16 models on the CIFAR-10 and SVHN datasets under various attack methods, including several white-box and black-box attacks. Experimental results show that the proposed method achieves superior robustness to prior works.
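To make the Gaussian kernel regularization idea concrete, the sketch below shows one plausible form of such a constraint: a Gaussian (RBF) kernel penalty on the gap between activations of clean and adversarially perturbed inputs, added to the standard classification loss. This is a minimal illustration, not the paper's exact formulation; the function names, the assumption that the model returns intermediate activations alongside logits, and the hyperparameters `lam` and `sigma` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel_reg(clean_act, adv_act, sigma=1.0):
    """Gaussian (RBF) kernel penalty on the activation gap.

    1 - k(a, a') is near 0 when clean and perturbed activations agree
    and approaches 1 as the gap grows, so minimizing it suppresses
    the amplification of perturbation error through the network.
    """
    sq_dist = (clean_act - adv_act).pow(2).flatten(1).sum(dim=1)
    return (1.0 - torch.exp(-sq_dist / (2.0 * sigma ** 2))).mean()

def qans_style_loss(model, x, x_adv, y, lam=0.5, sigma=1.0):
    """Hypothetical training objective: cross-entropy on clean inputs
    plus the kernel regularizer tying adversarial activations to clean
    ones. Assumes model(x) returns (logits, activations)."""
    logits, act_clean = model(x)
    _, act_adv = model(x_adv)
    ce = F.cross_entropy(logits, y)
    reg = gaussian_kernel_reg(act_clean, act_adv, sigma)
    return ce + lam * reg
```

Here `sigma` sets the tolerance of the kernel to activation drift, so a two-level tolerance as described in the abstract could be realized by applying the penalty with different `sigma` values at different depths of the network.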
