Abstract

It is important to evaluate the security of existing digital image tampering localization algorithms in real-world applications. In this paper, we propose an adversarial attack scheme that reveals the unreliability of such deep learning-based tampering localizers, which can be fooled into failing to predict altered regions correctly. Specifically, two practical adversarial example methods are presented within a unified attack framework. In the optimization-based adversarial attack, the victim forged image is treated as the parameter to be optimized by the Adam optimizer. In the gradient-based adversarial attack, an invisible perturbation generated by the Fast Gradient Sign Method (FGSM) is added to the tampered image along the gradient ascent direction. A black-box attack is achieved by relying on the transferability of such adversarial examples to different localizers. Extensive experiments verify that our attacks sharply reduce tampering localization accuracy while preserving the high visual quality of attacked images. Source code is available at https://github.com/multimediaFor/AttackITL.
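The two attack variants named in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation; it assumes a hypothetical differentiable `localizer` that outputs a per-pixel tamper-probability map, a forged `image` tensor in [0, 1], and its ground-truth tampering `mask`, and it formulates both attacks as untargeted gradient ascent on the localization loss.

```python
# Hedged sketch of the two attack variants described in the abstract.
# `localizer`, `image`, and `mask` are hypothetical placeholders:
#   localizer(image) -> per-pixel tamper probabilities in [0, 1]
#   image            -> forged image tensor, values in [0, 1]
#   mask             -> ground-truth binary tampering mask
import torch
import torch.nn.functional as F


def fgsm_attack(localizer, image, mask, eps=2 / 255):
    """Gradient-based attack: a single FGSM step along the gradient ascent direction."""
    image = image.clone().requires_grad_(True)
    loss = F.binary_cross_entropy(localizer(image), mask)
    loss.backward()
    # Shift every pixel by +/- eps in the direction that increases the loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()


def optimization_attack(localizer, image, mask, steps=100, lr=1e-3, eps=4 / 255):
    """Optimization-based attack: the forged image itself is the parameter
    updated by Adam so as to maximize the localization loss."""
    adv = image.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Negate the loss so Adam's descent step performs gradient ascent.
        loss = -F.binary_cross_entropy(localizer(adv), mask)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Keep the perturbation imperceptible and the image valid.
            adv.clamp_(image - eps, image + eps).clamp_(0, 1)
    return adv.detach()
```

In both sketches the perturbation budget `eps` is an assumed hyperparameter controlling the trade-off between attack strength and visual quality; the paper's actual loss function, budget, and projection scheme may differ.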
