Abstract

The vulnerability of deep learning models to adversarial attacks is a growing concern, as adversarial samples can be crafted against almost any model. This paper proposes a new method for mounting adversarial attacks through watermarking. Our goal is to leverage the properties of adversarial samples to prevent individuals' photographs from being maliciously collected and matched, thereby avoiding the leakage of private information. Our method, which improves on the multi-swarm particle swarm optimization (MPSO) algorithm, outperforms existing similar methods on two popular computer vision datasets. We conduct attack experiments on the widely used ImageNet dataset and achieve a top attack success rate of 89.50%. The experimental results demonstrate the superiority of our method over existing similar methods. We also simulate attacks in an online social environment using two face photograph datasets and face recognition models. Our method achieves the best deception performance among similar methods, with a top success rate of 97.03%, demonstrating its ability to protect individuals' privacy. Furthermore, we investigate the natural causes of adversarial samples and demonstrate their inevitability, providing valuable insights for developing more robust deep models. The source code of the proposed method is available online at: https://github.com/grandwang/main_attack.
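To make the MPSO component concrete, the sketch below shows how a multi-swarm particle swarm optimizer can search for watermark parameters (e.g. position and opacity) that maximize a black-box attack objective. It is only an illustration of the general technique, not the paper's implementation: the fitness function, dimensionality, migration schedule, and all hyperparameters here are assumptions chosen for readability.

    # Minimal multi-swarm PSO sketch for watermark-based adversarial search.
    # All names and settings are illustrative assumptions, not the paper's.
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(params):
        # Placeholder black-box objective. In the attack setting this would
        # blend a watermark into the image at the encoded position/opacity
        # and return the target model's loss (higher = stronger attack).
        return -np.sum((params - 0.3) ** 2)

    DIM = 3          # e.g. (x, y, alpha): watermark position and opacity
    N_SWARMS = 4     # independent sub-swarms explore in parallel
    N_PARTICLES = 10
    ITERS = 50
    W, C1, C2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients

    pos = rng.uniform(0, 1, (N_SWARMS, N_PARTICLES, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([[fitness(p) for p in swarm] for swarm in pos])
    sbest = pbest[np.arange(N_SWARMS), pbest_val.argmax(axis=1)]  # swarm bests

    for t in range(ITERS):
        for s in range(N_SWARMS):
            r1, r2 = rng.random((2, N_PARTICLES, DIM))
            vel[s] = (W * vel[s]
                      + C1 * r1 * (pbest[s] - pos[s])
                      + C2 * r2 * (sbest[s] - pos[s]))
            pos[s] = np.clip(pos[s] + vel[s], 0, 1)
            vals = np.array([fitness(p) for p in pos[s]])
            improved = vals > pbest_val[s]
            pbest[s][improved] = pos[s][improved]
            pbest_val[s][improved] = vals[improved]
            sbest[s] = pbest[s][pbest_val[s].argmax()]
        # Periodic migration: swarms share the overall best solution to
        # counter premature convergence (one common multi-swarm variant).
        if t % 10 == 9:
            g = sbest[np.array([fitness(b) for b in sbest]).argmax()]
            sbest[:] = g

    best = sbest[np.array([fitness(b) for b in sbest]).argmax()]
    print("best watermark params:", best, "fitness:", fitness(best))

The multi-swarm structure trades some per-swarm convergence speed for better exploration of the watermark parameter space, which matters when the attack objective is non-convex and evaluated only through model queries.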
