Abstract
Vision-language pre-training (VLP) models have demonstrated significant success in various domains, but they remain vulnerable to adversarial attacks. Addressing these adversarial vulnerabilities is crucial for enhancing security in multi-modal learning. Traditionally, adversarial methods targeting VLP models perturb images and text simultaneously. However, this approach faces significant challenges. First, adversarial perturbations often fail to translate effectively into real-world scenarios. Second, direct modifications to the text are conspicuously visible. To overcome these limitations, we propose a novel strategy that uses only image patches for attacks, thus preserving the integrity of the original text. Our method leverages prior knowledge from diffusion models to enhance the authenticity and naturalness of the perturbations. Moreover, to optimize patch placement and improve the effectiveness of our attacks, we utilize the cross-attention mechanism, which encapsulates inter-modal interactions by generating attention maps that guide strategic patch placement. Extensive experiments conducted in a white-box setting for image-to-text scenarios reveal that our proposed method significantly outperforms existing techniques, achieving a 100% attack success rate.
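As a rough illustration of the attention-guided placement idea summarized above, the Python sketch below scores every candidate window of a cross-attention map and pastes a patch at the highest-scoring location. This is a minimal sketch under assumed interfaces, not the paper's implementation: the function names (`best_patch_location`, `place_patch`), the tensor shapes, and the random toy inputs are all illustrative assumptions; in practice the attention map would come from the VLP model's image-text cross-attention layers and the patch from the diffusion-guided optimization.

```python
# Hypothetical sketch of attention-guided patch placement.
# All names and shapes here are assumptions for illustration only.
import torch
import torch.nn.functional as F


def best_patch_location(attn_map: torch.Tensor, patch_size: int) -> tuple:
    """Return the top-left (row, col) of the patch_size x patch_size window
    with the highest total cross-attention mass."""
    # Average pooling with stride 1 scores every sliding window of the map.
    window_scores = F.avg_pool2d(
        attn_map[None, None], kernel_size=patch_size, stride=1
    )[0, 0]
    idx = torch.argmax(window_scores).item()
    row, col = divmod(idx, window_scores.shape[1])
    return row, col


def place_patch(image: torch.Tensor, patch: torch.Tensor,
                row: int, col: int) -> torch.Tensor:
    """Paste an adversarial patch (C x P x P) into a copy of the image (C x H x W)."""
    out = image.clone()
    p = patch.shape[-1]
    out[:, row:row + p, col:col + p] = patch
    return out


# Toy usage with random tensors (assumed stand-ins for real model outputs).
attn_map = torch.rand(224, 224)     # assumed image-resolution cross-attention map
patch = torch.rand(3, 32, 32)       # assumed learned adversarial patch
image = torch.rand(3, 224, 224)     # assumed input image
r, c = best_patch_location(attn_map, patch_size=32)
adv_image = place_patch(image, patch, r, c)
```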