Abstract

Maliciously forged images generated by image translation networks pose serious threats to personal privacy and even national security. An emerging countermeasure is to use adversarial attacks to prevent image forgery models from tampering with user images. Conventional adversarial example generation algorithms start from random noise, so the final adversarial output remains similar to the original output and fails to prevent tampering. In this work, an output-correlated initialization is applied to improve the adversarial attack on image translation networks and to enhance the visual effect of the attack. Moreover, a comparative experiment is performed on multiple loss functions, and the best-performing one is selected as the adversarial loss for attacking the image translation network. The proposed initialization makes the search for adversarial examples more thorough and the generated examples more diverse. Visual analysis shows how the proposed attack disrupts the forgery results of different image translation frameworks, producing more chaotic outputs. Comparison across multiple indicators demonstrates that the proposed method achieves a high attack success rate and enlarges the image distance between the adversarial output and the original output, thereby improving attack efficiency, preventing malicious tampering, and protecting user images.
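
To make the described pipeline concrete, below is a minimal PyTorch sketch of such a disruption attack. The abstract does not give exact formulas, so several details here are assumptions for illustration: the output-correlated initialization is realized as a projection of G(x) - x onto the epsilon-ball, the candidate losses are limited to L1 and L2, and the hyperparameters (eps, alpha, steps) and the model handle G are hypothetical.

    import torch
    import torch.nn.functional as F

    def output_correlated_init(G, x, eps):
        # Seed the perturbation from the clean translation output G(x)
        # instead of random noise. This particular form (a clipped
        # G(x) - x difference) is an assumed realization; the paper
        # only states that the initialization correlates with the output.
        with torch.no_grad():
            y = G(x)
        return (y - x).clamp(-eps, eps)

    # Candidate adversarial losses to compare; the best-performing one
    # is selected empirically. The exact candidate set is an assumption.
    LOSSES = {
        "l1": F.l1_loss,
        "l2": F.mse_loss,
    }

    def disrupt_translation(G, x, eps=8/255, alpha=2/255, steps=40,
                            loss_name="l2"):
        # PGD-style loop that pushes the translated output of the
        # perturbed image away from the clean translated output, so the
        # forgery model produces a visibly corrupted result.
        loss_fn = LOSSES[loss_name]
        with torch.no_grad():
            y_clean = G(x)                        # reference forgery output
        delta = output_correlated_init(G, x, eps).requires_grad_(True)
        for _ in range(steps):
            loss = loss_fn(G(x + delta), y_clean)  # output distance
            loss.backward()                         # gradient w.r.t. delta
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascent: maximize distance
                delta.clamp_(-eps, eps)             # project to eps-ball
                # keep the adversarial image in the valid pixel range
                delta.copy_((x + delta).clamp(0, 1) - x)
            delta.grad.zero_()
        return (x + delta).detach()

Note the sign of the update: unlike a classification attack that minimizes a loss toward a target, the step ascends the distance between G(x + delta) and G(x), which is what drives the forgery output toward the chaotic results described above.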
