Abstract

Deep cross-modal hashing has achieved excellent retrieval performance thanks to the powerful representation capability of deep neural networks. Regrettably, current methods are inevitably vulnerable to adversarial attacks, especially to well-designed subtle perturbations that can easily fool deep cross-modal hashing models into returning irrelevant or attacker-specified results. Although adversarial attacks have attracted increasing attention, there are few studies on specialized attacks against deep cross-modal hashing. To address this gap, we propose a targeted adversarial attack method against deep cross-modal hashing retrieval. To the best of our knowledge, this is the first work in this research field. Concretely, we first build a progressive fusion module that extracts fine-grained target semantics through a progressive attention mechanism. Meanwhile, we design a semantic adaptation network that generates the target prototype code and reconstructs the category label, thereby realizing semantic interaction between the target semantics and the implicit semantics of the attacked model. To bridge modality gaps and preserve local example details, a semantic translator seamlessly translates the target semantics and embeds them into benign examples in collaboration with a U-Net framework. Moreover, we construct a discriminator for adversarial training, which enhances the visual realism and category discrimination of adversarial examples and thus improves their targeted attack performance. Extensive experiments on widely used cross-modal retrieval datasets demonstrate the superiority of the proposed method. In addition, transferable attack experiments show that the generated adversarial examples have good generalization capability in targeted attacks. The source codes and datasets are available at https://github.com/tswang0116/TA-DCH.
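At a high level, the described pipeline pairs a generator that embeds target semantics into a benign example with a discriminator that enforces visual realism, while the attacked hashing model's output is pushed towards a target prototype code. The snippet below is a minimal PyTorch sketch of that idea for the image modality only; it is not the authors' TA-DCH implementation, and `Generator`, `Discriminator`, `hash_model`, `target_code`, and the loss weighting `lam` are hypothetical placeholders standing in for the U-Net-based semantic translator, the adversarial discriminator, the frozen attacked model, the target prototype code, and a tuning hyperparameter.

```python
# Minimal sketch (assumptions noted above) of targeted adversarial example
# generation against a frozen deep cross-modal hashing model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Toy encoder-decoder standing in for the U-Net-based semantic translator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, eps=8 / 255):
        # A bounded additive perturbation keeps the adversarial example subtle.
        return torch.clamp(x + eps * self.net(x), 0.0, 1.0)


class Discriminator(nn.Module):
    """Real/fake critic used for adversarial training of the generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)


def targeted_hash_loss(hash_model, x_adv, target_code):
    # Push the relaxed hash code of the adversarial example towards the target
    # prototype code (entries in {-1, +1}) of the attacker-specified category.
    h = torch.tanh(hash_model(x_adv))            # continuous codes in (-1, 1)
    return -(h * target_code).sum(dim=1).mean()  # maximize code agreement


def attack_step(hash_model, gen, disc, opt_g, opt_d, x, target_code, lam=1.0):
    """One adversarial training step: update the discriminator, then the generator.
    The attacked hashing model is assumed frozen (its parameters are not optimized)."""
    x_adv = gen(x)

    # Discriminator: distinguish benign examples from adversarial ones.
    real_logits = disc(x)
    fake_logits = disc(x_adv.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while matching the target hash code.
    adv_logits = disc(x_adv)
    g_loss = (
        F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))
        + lam * targeted_hash_loss(hash_model, x_adv, target_code)
    )
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return x_adv.detach()
```

In this sketch the targeted objective is expressed as maximizing the inner product between the relaxed code of the adversarial example and the target prototype code, which is one common way to drive retrieved neighbors towards the attacker's chosen category; the actual TA-DCH objective and network architectures are given in the paper and repository linked above.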
