Deep learning has dramatically advanced computer vision tasks, including person re-identification (re-ID), substantially improving the matching of individuals across diverse camera views. However, re-ID systems remain vulnerable to adversarial attacks that introduce imperceptible perturbations, leading to misidentification and undermining system reliability. This paper addresses robust person re-ID under adversarial examples by estimating attack intensity to enable effective detection and adaptive purification. The proposed approach builds on the observation that adversarial examples in retrieval tasks disrupt the relevance and internal consistency of the retrieval results, degrading re-ID accuracy. By analyzing query response data, the method estimates the attack intensity and dynamically adjusts the purification strength, overcoming the limitations of fixed-strength purification. It also preserves performance on clean data by avoiding unnecessary manipulation, improving both the robustness and the reliability of the system against adversarial examples. Experimental results demonstrate that the proposed method effectively detects adversarial examples and estimates attack intensity through query response analysis, and that it enhances purification performance when integrated with adversarial purification techniques in person re-ID systems.
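The abstract does not specify the estimator's exact form, but a minimal sketch of the core idea follows: score the internal consistency of a query's top-k retrieval set, then map lower consistency (a suspected stronger attack) to a higher purification strength. All function names, the consistency measure, and the thresholds (`clean_ref`, `floor`) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def retrieval_consistency(query_feat, gallery_feats, k=10):
    """Score the internal consistency of a query's top-k retrieval set.

    Clean queries tend to retrieve a coherent group of gallery images
    (often the same identity), so the top-k features are mutually
    similar; an adversarial query scatters the ranking, lowering
    this score.
    """
    # L2-normalize so dot products are cosine similarities.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)

    # Rank the gallery by similarity to the query and keep the top-k.
    sims = g @ q
    topk = g[np.argsort(-sims)[:k]]

    # Mean pairwise cosine similarity within the top-k set,
    # excluding the self-similarity entries on the diagonal.
    pairwise = topk @ topk.T
    off_diag = pairwise[~np.eye(k, dtype=bool)]
    return float(off_diag.mean())

def estimate_purification_strength(consistency, clean_ref=0.8,
                                   floor=0.2, max_strength=1.0):
    """Map a consistency score to a purification strength in [0, max_strength].

    `clean_ref` stands for a consistency level assumed to be measured on
    clean validation queries: scores at or above it yield strength ~0 so
    clean inputs are left untouched, while lower scores yield
    proportionally stronger purification.
    """
    deficit = max(0.0, clean_ref - consistency)
    strength = max_strength * deficit / max(clean_ref - floor, 1e-6)
    return min(strength, max_strength)

# Toy usage: random vectors stand in for re-ID embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(500, 256))
query = rng.normal(size=256)
c = retrieval_consistency(query, gallery, k=10)
print(f"consistency={c:.3f}, strength={estimate_purification_strength(c):.3f}")
```

The adaptive mapping is what distinguishes this scheme from fixed purification: a near-zero strength on high-consistency queries is how clean-data performance is preserved, while the strength grows with the estimated severity of the attack.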