Abstract

The wide application of deep neural networks (DNNs) has significantly improved the performance of hashing models on multimodal retrieval tasks. DNN-based deep models automatically learn semantic features from raw data and make human-level decisions. However, this strong generalization ability introduces potential privacy-leakage risks: powerful DNN-based retrieval models enable malicious crawlers to locate untagged private information through semantic similarity matching. Deploying effective privacy-protection mechanisms against such retrieval software is therefore essential for building reliable social websites. In this article, we propose a retrieval-task-oriented adversarial perturbation generation method, called Hashing Fake, to meet this need. Specifically, DNNs have recently been found to be vulnerable to a class of attacks known as adversarial perturbations: magnitude-restricted signals added to target samples that mislead well-trained DNN models while remaining too small to be perceived by humans. Moreover, because existing adversarial perturbation generation methods are designed for supervised tasks, Hashing Fake constructs a differentiable substitute objective for perturbation generation on unsupervised retrieval tasks. Through extensive experiments on several deep retrieval benchmarks, we demonstrate that perturbations crafted with Hashing Fake effectively mislead target models into making false predictions. Because the norm-restricted perturbations added to target samples do not alter human perception, Hashing Fake can be applied to real-world social websites to protect subscribers' privacy against malicious retrieval software.
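To make the general idea concrete, the sketch below illustrates the kind of perturbation generation the abstract describes: an L-infinity-bounded, gradient-sign attack that uses tanh as a differentiable stand-in for the non-differentiable sign() binarization of a deep hashing model, and pushes a sample's relaxed hash code away from its original code. This is only an illustrative PyTorch sketch under assumed settings (the function name, the epsilon/steps/alpha parameters, the tanh relaxation, and the cosine-similarity objective are my assumptions), not the paper's exact Hashing Fake procedure.

```python
import torch
import torch.nn.functional as F

def hashing_adversarial_perturbation(model, image, epsilon=8 / 255, steps=20, alpha=2 / 255):
    """Illustrative sketch: craft an L-infinity-bounded perturbation that pushes
    the relaxed hash code of `image` away from its original binary code.

    Assumes `model` outputs continuous hashing logits; tanh serves as a
    differentiable approximation of the sign() binarization.
    """
    model.eval()
    with torch.no_grad():
        original_code = torch.sign(model(image))          # reference binary hash code

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        relaxed_code = torch.tanh(model(image + delta))   # differentiable surrogate for sign()
        # Maximize disagreement with the original code (minimize cosine similarity).
        loss = -F.cosine_similarity(relaxed_code.flatten(1),
                                    original_code.flatten(1)).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()            # gradient-sign ascent step
            delta.clamp_(-epsilon, epsilon)               # keep the perturbation imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)  # stay in the valid pixel range
        delta.grad.zero_()

    return (image + delta).detach()
```

In this sketch, the tanh relaxation is what allows gradients to flow through the otherwise non-differentiable hash binarization, and the epsilon clamp enforces the norm restriction that keeps the perturbation invisible to humans.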
