Abstract

Person re-identification (ReID) in real-world scenes remains a challenging task, especially when the target pedestrian is occluded. Existing occluded person ReID methods use pose or semantic information to identify visible regions and focus on the body cues within them, which yields strong ReID performance. However, because these approaches depend on auxiliary networks, efficient models are difficult to obtain. In addition, under pose guidance, such networks attend only to the visible body parts and ignore features from other regions. We therefore propose a Robust Feature Mining Transformer (RFMT) to address these problems in occluded person re-identification. Specifically, when handling occlusion, the proposed Residual Transformer (ResT) layer propagates image features more efficiently through cross-layer connections, reducing the loss of pedestrian information. Furthermore, since obtaining complete pedestrian features is difficult in occluded scenes, our proposed Global Attention (GAtt) module enhances feature extraction by focusing accurately on the most representative parts of an image. We also design a hyperparametric centroid loss (Lctr) to capture stable features; this loss additionally reduces image search time during training and retrieval. Extensive experiments demonstrate that RFMT outperforms state-of-the-art ReID methods on public occluded and holistic person re-identification datasets.
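The abstract does not define Lctr, so the sketch below is only a rough illustration of the centroid-loss idea it invokes: a generic centroid triplet loss in PyTorch, where each anchor is pulled toward the mean embedding of its own identity and pushed away from the nearest other-identity centroid. The function name, the margin hyperparameter, and the batch-sampling assumptions are all hypothetical, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def centroid_triplet_loss(embeddings, labels, margin=0.3):
    """Generic centroid triplet loss (illustrative sketch, not the paper's Lctr).

    embeddings: (N, D) feature vectors from the backbone
    labels:     (N,)   integer identity labels
    margin:     assumed margin hyperparameter
    """
    embeddings = F.normalize(embeddings, dim=1)
    losses = []
    for i in range(embeddings.size(0)):
        same = labels == labels[i]
        same[i] = False  # exclude the anchor itself from its own centroid
        neg_mask = labels != labels[i]
        if same.sum() == 0 or neg_mask.sum() == 0:
            continue  # need at least one positive and one negative identity
        # positive centroid: mean embedding of the anchor's identity
        pos_centroid = embeddings[same].mean(dim=0)
        # one centroid per other identity; keep the closest (hardest) one
        neg_ids = labels[neg_mask].unique()
        neg_centroids = torch.stack(
            [embeddings[labels == j].mean(dim=0) for j in neg_ids]
        )
        d_pos = (embeddings[i] - pos_centroid).norm()
        d_neg = (embeddings[i] - neg_centroids).norm(dim=1).min()
        losses.append(F.relu(d_pos - d_neg + margin))
    if not losses:
        return embeddings.new_zeros(())
    return torch.stack(losses).mean()
```

One plausible reading of the abstract's claim about search time, consistent with centroid-based retrieval in general, is that each gallery identity can be represented by a single centroid at test time, shrinking the number of distance computations per query.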
