Person re-identification (Re-ID), widely used in surveillance and tracking systems, aims to match individuals as they move between cameras by maintaining a consistent identity across views. Recent advances have introduced convolutional neural networks (CNNs) and vision transformers (ViTs) as promising solutions. While CNN-based methods excel at local feature extraction, ViTs have emerged as effective alternatives for person Re-ID, capturing long-range dependencies through multi-head self-attention without relying on convolution or downsampling. Nevertheless, Re-ID still faces challenges such as changes in illumination, viewpoint, and pose, low resolution, and partial occlusion. To address the limitations of widely used person Re-ID datasets and improve generalization, we present a novel person Re-ID method that enhances global and local information interactions using self-attention modules within a ViT network. It leverages dynamic pruning to extract and prioritize essential image patches. The resulting patch selection and pruning model yields a feature extractor that remains robust under partial occlusion, background clutter, and illumination variations. Empirical validation demonstrates superior performance over previous approaches and adaptability across various domains.
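The attention-guided patch pruning described above can be illustrated with a minimal sketch. This is not the paper's implementation: the scoring rule (dot-product attention from a class token), the `keep_ratio` parameter, and the helper names are all illustrative assumptions; the actual model scores and prunes patches inside a full ViT.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prune_patches(patch_tokens, cls_token, keep_ratio=0.5):
    """Illustrative dynamic pruning: score each patch token by its
    attention to a class token, then keep only the top-scoring patches.

    patch_tokens : (num_patches, dim) array of patch embeddings
    cls_token    : (dim,) class-token embedding
    keep_ratio   : fraction of patches to retain (assumed hyperparameter)
    """
    dim = cls_token.shape[0]
    # Scaled dot-product attention scores of the class token over patches.
    scores = softmax(patch_tokens @ cls_token / np.sqrt(dim))
    k = max(1, int(len(patch_tokens) * keep_ratio))
    # Indices of the k highest-scoring patches, restored to image order.
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return patch_tokens[keep], keep

# Example: 16 patch tokens of dimension 8, keep the top 25%.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))
cls = rng.normal(size=8)
kept, idx = prune_patches(tokens, cls, keep_ratio=0.25)
print(kept.shape)  # (4, 8)
```

In this toy form, background-clutter patches that attract little class-token attention would be dropped before later transformer layers, which is the intuition behind prioritizing essential patches.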