Search engines based on deep neural networks (DNNs), such as Google Images, make large-scale image retrieval highly effective. However, these engines can also be misused in ways that seriously compromise privacy: driven by factors such as economic interests, hackers and other malicious parties may steal or tamper with user-uploaded image data, causing privacy leakage in image hash retrieval. Previous work protected user privacy with an adversarial attack based on an approximation strategy in the white-box setting, but that method converges slowly. In this study, we employ a penalty norm that imposes a strict constraint to quantize the features of a query image into binary code through a non-convex optimization process. Moreover, we adopt a forward–backward strategy to overcome the vanishing gradient caused by the quantization function. Evaluated on two widely used datasets, our method achieves strong performance with fast convergence. Compared with other image privacy protection methods, it also attains the best performance in terms of both privacy protection and image quality.
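The core mechanism described above can be illustrated with a minimal NumPy sketch: a sign quantizer whose forward pass emits binary codes, a straight-through-style backward pass that substitutes a surrogate gradient for the quantizer's zero gradient (one common reading of a "forward–backward strategy"), and a penalty term that drives continuous features toward ±1. All function names and the exact penalty form are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def quantize_forward(z):
    # Forward pass: hard sign quantization of continuous features
    # into a binary hash code in {-1, +1}.
    return np.where(z >= 0.0, 1.0, -1.0)

def quantize_backward(grad_output, z, clip=1.0):
    # Backward pass: sign() has zero gradient almost everywhere, so we
    # pass the upstream gradient through unchanged, masking saturated
    # inputs (|z| > clip). This is a straight-through-style surrogate,
    # used here as an assumed stand-in for the paper's strategy.
    return grad_output * (np.abs(z) <= clip)

def penalty_norm(z, p=3):
    # Hypothetical penalty sum(||z_i| - 1|^p): it is zero exactly when
    # every entry is binary, so minimizing it enforces quantization.
    return float(np.sum(np.abs(np.abs(z) - 1.0) ** p))

# Usage: quantize a feature vector and check the penalty behavior.
z = np.array([0.4, -1.7, 0.0, -0.2])
code = quantize_forward(z)           # binary code for hash retrieval
grad = quantize_backward(np.ones_like(z), z)  # surrogate gradient
```

In this sketch the penalty vanishes only at exactly binary features, which mirrors the "strict constraint" role described in the abstract, while the surrogate backward pass keeps gradient-based optimization usable despite the non-differentiable quantizer.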