Abstract

Handwritten signature verification confirms an individual's identity by recognizing their signature. Adversarial examples can induce misclassification and therefore pose a severe threat to signature verification. A variety of adversarial example attacks have been developed for image classification, but they transfer poorly to signature verification for two main reasons. First, their perturbations tend to fall on the background of signature images, making them perceptible to human eyes. Second, attackers rarely have perfect knowledge of the target verification system. How to generate effective yet stealthy signature adversarial examples therefore remains an open problem. To shed light on this problem, we propose the first black-box adversarial example attack against handwritten signature verification. Our method has two key designs. First, perturbations are intentionally restricted to the foreground (i.e., the strokes) of signature images, which reduces the risk of detection by human observers. Second, a gradient-free procedure produces the desired perturbations by iteratively updating their positions and optimizing their intensities. Extensive experiments confirm three advantages of our method. First, the adversarial perturbations it generates are almost invisible, whereas those produced by existing methods are clearly noticeable. Second, it defeats a state-of-the-art signature verification method with a surprisingly high success rate of 92.1%. Last, it circumvents the background-cleaning defense, even though this defense neutralizes almost all existing adversarial example attacks on signature verification.
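To illustrate the foreground constraint described above, the following minimal sketch shows one way a perturbation could be confined to stroke pixels. The thresholding heuristic, function names, and parameter values here are our own assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def stroke_mask(image, threshold=0.5):
    """Approximate foreground (stroke) mask for a grayscale signature.

    Assumes dark strokes on a light background with pixel values in [0, 1];
    this is an illustrative heuristic, not the method from the paper.
    """
    return image < threshold

def apply_foreground_perturbation(image, perturbation, threshold=0.5):
    """Add a perturbation only on stroke pixels and clip to the valid range.

    Restricting the perturbation to the foreground leaves the background
    untouched, which is the stealth constraint described in the abstract.
    """
    mask = stroke_mask(image, threshold)
    adversarial = image + perturbation * mask
    return np.clip(adversarial, 0.0, 1.0)

# Usage with a stand-in signature image and a small random perturbation.
rng = np.random.default_rng(0)
signature = rng.random((64, 128))
delta = 0.05 * rng.standard_normal((64, 128))
adv = apply_foreground_perturbation(signature, delta)
```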
