Abstract

Person re-identification (ReID) systems based on deep neural networks have been shown to be vulnerable to adversarial examples, i.e., images to which only slight perturbations have been added. Among previous defense methods, those based on input transformations tend to reduce recognition accuracy, while those based on adversarial example detection merely prevent suspected adversarial samples from entering the system rather than genuinely improving its robustness. In this paper, we aim to construct a robust person ReID model that not only defends against adversarial attacks but also maintains recognition accuracy. Our implementation combines the advantages of adversarial example detection and adversarial training. On the one hand, we propose a novel adversarial example detection method based on perturbation information; it not only achieves high detection accuracy but can also purify adversarial examples through a simple perturbation-removal operation. On the other hand, we propose an adversarial example generation method for the matching problem and use the examples it generates to extract perturbations. That is, we train a perturbation extractor in a manner similar to adversarial training. Experiments demonstrate the effectiveness of our method. For example, against Deep Mis-Ranking, currently the strongest attack, our model's accuracy improves from 36.68% to 73.97% compared with the previous state-of-the-art defense. In addition, our defense can be deployed as a plug-and-play solution to protect existing ReID systems.

Keywords: Person re-identification, Adversarial examples detection, Adversarial defense, Perturbation extraction
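To make the detect-then-purify idea concrete, the following is a minimal sketch of such a pipeline, not the paper's actual implementation. It assumes a trained perturbation-extractor network (here a hypothetical `extractor` module) and an assumed detection threshold `tau` on the mean magnitude of the extracted perturbation; both names are illustrative.

```python
import torch

def detect_and_purify(image: torch.Tensor, extractor: torch.nn.Module,
                      tau: float = 0.05):
    """Hypothetical detect-then-purify pipeline for a ReID input.

    `extractor` is assumed to map an input image to its estimated
    additive adversarial perturbation; `tau` is an assumed detection
    threshold on the perturbation's mean absolute magnitude.
    """
    with torch.no_grad():
        perturbation = extractor(image)  # estimated additive noise

    # Detection: a large estimated perturbation flags an adversarial input.
    is_adversarial = perturbation.abs().mean().item() > tau

    # Purification: subtract the estimated perturbation and clamp back
    # to the valid pixel range before passing the image to the ReID model.
    purified = (image - perturbation).clamp(0.0, 1.0) if is_adversarial else image
    return is_adversarial, purified
```

The design choice sketched here is that detection and purification share one component: the same extracted perturbation whose magnitude drives the detection decision is subtracted to restore the input, which is what allows flagged images to be cleaned rather than simply rejected.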

