It is important to evaluate the security of existing deep learning-based image tampering localization algorithms before deploying them in real-world applications. In this paper, we propose an adversarial attack scheme that exposes the unreliability of such localizers: once attacked, they are fooled into failing to predict the altered regions correctly. Specifically, two practical adversarial example generation methods are presented within a unified attack framework. In the optimization-based attack, the victim tampered image itself is treated as the parameter to be optimized with the Adam optimizer. In the gradient-based attack, an invisible perturbation produced by the Fast Gradient Sign Method (FGSM) is added to the tampered image along the gradient ascent direction. A black-box attack is achieved by exploiting the transferability of such adversarial examples across different localizers. Extensive experiments verify that our attacks sharply reduce tampering localization accuracy while preserving the high visual quality of the attacked images. Source code is available at https://github.com/multimediaFor/AttackITL.
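The gradient-based attack described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the victim here is a hypothetical per-pixel logistic "localizer" p = sigmoid(w*x + b) rather than a deep network, but the FGSM step is the same — add an epsilon-bounded perturbation along the gradient-ascent direction of the localization loss so the predicted mask diverges from the ground-truth tampering mask.

```python
# Toy FGSM attack sketch (assumed setup; real victims are deep localizers).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y, eps=1e-12):
    """Per-pixel binary cross-entropy, averaged over the image."""
    return float(np.mean(-y * np.log(p + eps) - (1 - y) * np.log(1 - p + eps)))

def fgsm_attack(x, w, b, y, epsilon=8 / 255):
    """One FGSM step: x_adv = clip(x + eps * sign(dL/dx), 0, 1).

    For per-pixel BCE through z = w*x + b, the chain rule gives
    dL/dz = p - y, hence dL/dx = (p - y) * w.
    """
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w                         # gradient of loss w.r.t. image
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((8, 8))                        # tampered image in [0, 1]
    y = (rng.random((8, 8)) > 0.5).astype(float)  # ground-truth tamper mask
    w, b = rng.standard_normal((8, 8)), 0.0       # toy localizer weights
    x_adv = fgsm_attack(x, w, b, y)
    # Perturbation stays within the invisibility budget epsilon.
    print(np.abs(x_adv - x).max() <= 8 / 255)
    # Localization loss does not decrease: the predicted mask is pushed
    # away from the ground truth.
    print(bce_loss(sigmoid(w * x_adv + b), y) >= bce_loss(sigmoid(w * x + b), y))
```

The optimization-based variant differs only in that, instead of a single signed-gradient step, the tampered image is iteratively updated as an optimizable parameter (with Adam) against the same localization loss.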