Abstract

Computer vision tasks, such as image classification, semantic segmentation, and super resolution, are broadly utilized in many applications. Recent studies have revealed that machine learning-based models for these computer vision tasks are vulnerable to adversarial attacks. Because adversarial attacks can disturb computer vision models in real-world systems, many countermeasures have been proposed, such as denoising, resizing, and machine learning-based super resolution models used as a preprocessing step. Recently, a prior work demonstrated that a super resolution model used as preprocessing can itself be vulnerable to an adversarial attack targeting the preprocessing, but only when the perturbation is inactive before the preprocessing. However, we also found that a perturbation applied before the preprocessing can be another serious threat when the super resolution model is used to mitigate adversarial attacks. In this paper, we propose Layered Adversary Generation (LAG), which generates adversarial examples by recursively injecting noise into a clean image in a white-box environment. We then show that LAG effectively attacks a semantic segmentation model even when super resolution models are adopted to mitigate adversarial attacks, with or without the two auxiliary countermeasures of resizing and denoising. Furthermore, we demonstrate that LAG is transferable across other super resolution models. Lastly, we discuss our attack method in gray-box and black-box environments and suggest a mitigation for robust preprocessing.

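To make the setting concrete, the following is a minimal PyTorch sketch of the generic attack pattern described above: an iterative, white-box perturbation is injected into the clean image and optimized through the full super resolution-then-segmentation pipeline, so the noise remains effective after the preprocessing. The TinySR and TinySeg modules, the cross-entropy loss, and the step and budget parameters are illustrative assumptions, not the paper's actual LAG implementation.

```python
# Minimal sketch (not the paper's LAG code): a bounded, iterative white-box
# perturbation crafted against the composed pipeline "super resolution -> segmentation",
# i.e., the noise is injected before the preprocessing. All model definitions and
# hyperparameters below are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):                      # stand-in super resolution preprocessor
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return self.conv(x)

class TinySeg(nn.Module):                     # stand-in semantic segmentation model
    def __init__(self, n_classes=21):
        super().__init__()
        self.conv = nn.Conv2d(3, n_classes, 3, padding=1)
    def forward(self, x):
        return self.conv(x)                   # per-pixel class logits

def layered_attack(image, labels, sr, seg, eps=8/255, alpha=2/255, steps=10):
    """Iteratively add bounded noise to the clean image so the attack survives the
    SR preprocessing (white-box: gradients flow through both sr and seg)."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = seg(sr(x_adv))
        loss = F.cross_entropy(logits, labels)            # push predictions off the labels
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # gradient-ascent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # keep L_inf budget
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

if __name__ == "__main__":
    sr, seg = TinySR(), TinySeg()
    img = torch.rand(1, 3, 64, 64)
    lbl = torch.randint(0, 21, (1, 128, 128))             # labels at the SR output resolution
    adv = layered_attack(img, lbl, sr, seg)
    print((adv - img).abs().max())                        # perturbation stays within eps
```

Because gradients flow through the super resolution model as well as the segmentation model, the perturbation is shaped by the preprocessing itself, which is what distinguishes this setting from attacks crafted against the segmentation model alone.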