Abstract

In real scenes, occlusion caused by various factors is a key challenge for person re-identification (Re-ID). Many methods retrieve the corresponding images in the gallery by feature matching. However, occluded regions remain poorly handled, and the noise they inevitably introduce increases the difficulty of Re-ID. We therefore propose a Self-Similarity guided Probabilistic Embedding Matching (SSPEM) method for occluded person Re-ID. Specifically, we first design a Feature Similarity Enhancement (FSE) module that computes self-similarity over the features of the same pedestrian extracted by a Vision Transformer (ViT) and enhances the features by suppressing irrelevant information. We then design a Probabilistic Embedding Matching (PEM) module that represents features as probability distributions in a common embedding space, enabling richer feature structure to be learned. The uncertainty estimated for occluded regions guides the metric computation toward better-matched features. Extensive experiments on five challenging occluded and holistic person Re-ID datasets demonstrate the effectiveness of the proposed SSPEM method.
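The two ideas sketched in the abstract, self-similarity-based feature enhancement and matching of probabilistic embeddings, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the weighting scheme, the variance head, and the use of a 2-Wasserstein distance between diagonal Gaussians are all illustrative assumptions.

```python
import numpy as np

def self_similarity_enhance(tokens):
    """Hypothetical FSE-style step: compute token-wise cosine
    self-similarity and down-weight tokens dissimilar to the rest
    (e.g., occlusion noise), then pool into one feature."""
    norm = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = norm @ norm.T                       # (N, N) self-similarity
    weights = np.clip(sim.mean(axis=1), 0.0, None)
    weights /= weights.sum() + 1e-8
    return (weights[:, None] * tokens).sum(axis=0)

def gaussian_embedding(feat):
    """Probabilistic embedding: represent a feature as a diagonal
    Gaussian. The variance here is a fixed illustrative transform,
    standing in for a learned uncertainty head."""
    mu = feat
    sigma2 = np.abs(feat) * 0.1 + 1e-3        # hypothetical variance
    return mu, sigma2

def prob_distance(mu1, s1, mu2, s2):
    """2-Wasserstein distance between two diagonal Gaussians:
    uncertain (high-variance) dimensions contribute less sharply
    than a plain Euclidean distance on the means alone."""
    return np.sqrt(((mu1 - mu2) ** 2).sum()
                   + ((np.sqrt(s1) - np.sqrt(s2)) ** 2).sum())
```

Under this sketch, a query and gallery feature are each mapped to a Gaussian and ranked by `prob_distance`, so dimensions flagged as uncertain (e.g., from occluded regions) influence the match accordingly.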
