Abstract

Security protection has recently become important in many scenarios. Occluded person re-identification (Re-ID) aims to identify pedestrians across images captured by multiple cameras, even when those images are partially or fully occluded. Many state-of-the-art occluded Re-ID models rely on auxiliary modules such as pose estimation, feature pyramids, and graph matching to address occlusion. However, these auxiliary modules yield complex models that generalize poorly to diverse occlusions and may not handle non-occluded pedestrians effectively. Moreover, real-world Re-ID applications frequently involve both occluded and non-occluded pedestrians, making it difficult to develop a single versatile model. To tackle these issues, we introduce a novel Re-ID model that learns discriminative features at both local and global scales for occluded pedestrian identification. Our proposed Local-aware Transformer (LAT) for occluded person Re-ID comprises three modules: a Discriminative Feature Extraction Module (DFEM), a Local Feature Extraction Module (LFEM), and a Global Feature Extraction Module (GFEM). Experimental results on three occluded and two general Re-ID benchmarks demonstrate that our model surpasses existing state-of-the-art methods and achieves strong performance on both occluded and non-occluded Re-ID tasks.
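The abstract names LAT's three modules (DFEM, LFEM, GFEM) but does not specify their internals, so the following PyTorch sketch is purely illustrative: every layer choice, dimension, and the part count are our assumptions, not the paper's design. It shows one common way a local branch and a global branch can be composed over patch tokens.

import torch
import torch.nn as nn

class LocalAwareTransformer(nn.Module):
    # Hypothetical sketch of the three-module structure named in the abstract.
    # Only the module names come from the text; all layers below are assumed.
    def __init__(self, embed_dim=768, num_parts=4, num_ids=751):
        super().__init__()
        # DFEM (assumed): a small transformer encoder that refines patch
        # tokens into discriminative features.
        self.dfem = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8,
                                       batch_first=True),
            num_layers=2,
        )
        # LFEM (assumed): pools patch tokens into a few local part features,
        # a common strategy for occlusion-robust local cues.
        self.lfem = nn.AdaptiveAvgPool1d(num_parts)
        # GFEM (assumed): projects the token average into a global descriptor.
        self.gfem = nn.Linear(embed_dim, embed_dim)
        # Identity classifier; num_ids is a placeholder for the number of
        # training identities.
        self.classifier = nn.Linear(embed_dim, num_ids)

    def forward(self, tokens):
        # tokens: (batch, num_patches, embed_dim), e.g. from a ViT backbone.
        feats = self.dfem(tokens)
        # Local branch: (B, N, D) -> (B, D, N) -> pool -> (B, num_parts, D).
        local_feats = self.lfem(feats.transpose(1, 2)).transpose(1, 2)
        # Global branch: mean over tokens, then a linear projection.
        global_feat = self.gfem(feats.mean(dim=1))
        logits = self.classifier(global_feat)
        return local_feats, global_feat, logits

# Example usage: 2 images, 128 patch tokens each.
# x = torch.randn(2, 128, 768)
# local, g, logits = LocalAwareTransformer()(x)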
