Abstract

Image fusion has achieved significant success, owing to the rapid development of digital computing and Generative Adversarial Networks (GANs). GAN-based fusion techniques use encoders to map real images into latent codes, which are then fused through spatial or arithmetic operations to produce fused real images. However, security concerns have arisen because deep neural networks are vulnerable to adversarial perturbations, and this vulnerability extends to the GAN-based image fusion task. In this paper, we introduce two methods for creating adversarial examples in the context of GAN-based image fusion: adding subtle perturbations to the input images, or applying an adversarial patch to them. The subtle perturbation is meticulously crafted to be imperceptible yet steer the model toward a specific output, regardless of the input images. In contrast, the adversarial patch is a universal perturbation that, when applied to the input images as a patch, induces meaningless output images. Our comprehensive experiments, conducted on datasets such as FFHQ (3 × 1024 × 1024) and Stanford Cars (3 × 512 × 512), include both qualitative and quantitative evaluations. According to our experimental results, the subtle perturbation yields nearly identical output images, while the adversarial patch induces meaningless fused images and transfers to other datasets. By demonstrating that image fusion models are highly vulnerable to adversarial attacks, this study highlights serious concerns regarding the security of these models.
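The abstract does not give implementation details, but the subtle-perturbation attack it describes is typically mounted as a targeted, gradient-based optimization over the input image. Below is a minimal PGD-style sketch of that idea, assuming a PyTorch pipeline; the names `encoder`, `generator`, and `fuse_latents`, and all hyperparameters, are illustrative assumptions rather than the authors' method.

```python
# Minimal sketch (not the authors' code): a PGD-style targeted perturbation
# against a hypothetical GAN-based fusion pipeline. `encoder`, `generator`,
# and `fuse_latents` are assumed placeholders for an image encoder, a
# pretrained GAN generator, and a latent-fusion operator, respectively.
import torch
import torch.nn.functional as F

def targeted_perturbation(x_a, x_b, target, encoder, generator, fuse_latents,
                          eps=8/255, alpha=2/255, steps=40):
    """Craft an imperceptible perturbation for input x_a that pushes the
    fused output toward a fixed `target` image, whatever x_b is."""
    delta = torch.zeros_like(x_a, requires_grad=True)
    for _ in range(steps):
        # Fuse the (perturbed) inputs through the encoder/generator pipeline.
        w_fused = fuse_latents(encoder(x_a + delta), encoder(x_b))
        fused = generator(w_fused)
        loss = F.mse_loss(fused, target)  # drive the output toward the target
        loss.backward()
        with torch.no_grad():
            # Gradient-descent step, then project back into the L-inf ball.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.clamp_(-x_a, 1 - x_a)   # keep x_a + delta in [0, 1]
        delta.grad.zero_()
    return (x_a + delta).detach()
```

The L-infinity bound `eps` keeps the perturbation visually imperceptible, which matches the abstract's claim that the attacked inputs look unchanged while the fused output is controlled by the attacker. The universal adversarial patch attack would instead optimize a single patch over a whole dataset, without the imperceptibility constraint.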
