Abstract

Adversarial attacks are a cutting-edge technique for studying the vulnerability of deep neural networks (DNNs). However, most existing studies focus on additive perturbation-based attacks, which cannot represent real-world corruptions and thus have limited practical applicability. In particular, haze is a common natural phenomenon that significantly corrupts an image and therefore poses a potential threat to deep models. In this work, for the first time, we study the effects of haze on DNNs from the perspective of adversarial attacks and propose two adversarial haze attack methods. We first propose the optimization-based adversarial haze attack (OAdvHaze), which optimizes the parameters of the atmospheric scattering model under the guidance of a DNN to synthesize a hazy image, achieving a high attack success rate. For a more efficient attack, we further propose the predictive adversarial haze attack (PAdvHaze), which employs a DNN to predict the hazing parameters in a single step. To validate the effectiveness of both methods, we conducted extensive experiments on two publicly available datasets, i.e., ILSVRC 2012 and NIPS 2017. OAdvHaze and PAdvHaze achieve attack success rates and transferability comparable to state-of-the-art attacks. This work contributes to the evaluation and enhancement of the robustness of DNNs against haze perturbations that may occur in the real world.
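The atmospheric scattering model referenced in the abstract is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = e^{−β·d(x)}, where J is the clean image, d a depth map, β the scattering coefficient, and A the global atmospheric light. Below is a minimal PyTorch sketch of how an optimization-based haze attack in the spirit of OAdvHaze could look. The function names (synthesize_haze, oadvhaze), the scalar parameterization of β and A, the Adam optimizer, and the step count are all illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def synthesize_haze(clean, depth, beta, airlight):
    """Atmospheric scattering model: I = J * t + A * (1 - t), with t = exp(-beta * d)."""
    t = torch.exp(-beta * depth)              # transmission map, same shape as depth
    return clean * t + airlight * (1.0 - t)

def oadvhaze(model, clean, depth, label, steps=200, lr=0.01):
    """Sketch of an optimization-based haze attack (assumed scalar haze parameters).

    Optimizes the scattering coefficient and atmospheric light so that the
    synthesized hazy image maximizes the classifier's loss on the true label.
    """
    beta = torch.tensor(1.0, requires_grad=True)      # scattering coefficient (assumed init)
    airlight = torch.tensor(0.8, requires_grad=True)  # global atmospheric light (assumed init)
    opt = torch.optim.Adam([beta, airlight], lr=lr)
    for _ in range(steps):
        hazy = synthesize_haze(clean, depth,
                               beta.clamp(min=0.0),      # keep haze physically plausible
                               airlight.clamp(0.0, 1.0))
        loss = -F.cross_entropy(model(hazy), label)      # untargeted: push away from true label
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return synthesize_haze(clean, depth, beta.clamp(min=0.0), airlight.clamp(0.0, 1.0))
```

Here `clean` would be a (1, 3, H, W) image tensor, `depth` a (1, 1, H, W) depth map from, e.g., a monocular depth estimator, and `label` a LongTensor holding the ground-truth class. A predictive variant along the lines of PAdvHaze would replace the inner optimization loop with a single forward pass of a network trained to regress the haze parameters directly from the clean image.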
