Abstract

Multi-focus image fusion aims to create an all-in-focus image by fusing a set of partially focused images. In recent years, various deep-learning-based multi-focus image fusion methods have been proposed, but no thorough study has evaluated their robustness to adversarial attacks. In this paper, we investigate the robustness of deep-learning-based multi-focus image fusion models to adversarial attacks. First, we generate adversarial examples that significantly reduce the fusion quality of image fusion models. Then, we propose a metric, defocus attack intensity (DAI), to quantitatively evaluate the robustness of different models to adversarial attacks. Finally, we analyze the factors affecting model robustness, including model size and post-processing steps. In addition, we successfully attack recent image fusion models in the black-box setting by exploiting the transferability of adversarial examples. Experimental results show that state-of-the-art image fusion models are vulnerable to adversarial attacks, and that some observations from studies of image classifier robustness do not transfer to the image fusion task.
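To illustrate the kind of white-box attack the abstract describes, the sketch below crafts an FGSM-style adversarial perturbation that pushes a fusion model's output away from the all-in-focus target. The fusion model here is a hypothetical toy stand-in (a sigmoid-weighted per-pixel blend of the two source images), not any model from the paper, and the gradient is taken by finite differences purely to keep the example self-contained; the paper's actual attacks would use backpropagation through a trained network.

```python
import numpy as np

def fuse(x1, x2, w):
    # Toy stand-in for a fusion model: blend the two partially focused
    # images with a per-pixel sigmoid weight (hypothetical, for illustration).
    a = 1.0 / (1.0 + np.exp(-w * (x1 - x2)))
    return a * x1 + (1.0 - a) * x2

def fusion_loss(x1, x2, w, target):
    # MSE between the fused result and the all-in-focus reference.
    return np.mean((fuse(x1, x2, w) - target) ** 2)

def fgsm_attack(x1, x2, w, target, eps=0.03, delta=1e-4):
    # FGSM: step each pixel of x1 by eps in the sign of the loss gradient.
    # Gradient is estimated by central finite differences (no autodiff needed).
    grad = np.zeros_like(x1)
    for idx in np.ndindex(x1.shape):
        xp = x1.copy(); xp[idx] += delta
        xm = x1.copy(); xm[idx] -= delta
        grad[idx] = (fusion_loss(xp, x2, w, target)
                     - fusion_loss(xm, x2, w, target)) / (2.0 * delta)
    # Clip to the valid image range; perturbation stays within eps per pixel.
    return np.clip(x1 + eps * np.sign(grad), 0.0, 1.0)
```

The perturbation is imperceptibly small (bounded by `eps` per pixel) yet increases the fusion error, which is the failure mode the paper's DAI metric is designed to quantify.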
