Abstract

A method named Enhancement, Integration, and Expansion is proposed to activate the representation of detailed features for occluded person re-identification. Region and context are two important and complementary feature types, and integrating them under occlusion can effectively improve the robustness of the model. First, a self-enhancement module is designed: building on the constructed multi-stream architecture, rich and meaningful feature interference is introduced at the feature-extraction stage to strengthen the model's ability to perceive noise. Next, a collaborative integration module resembling cascaded cross-attention is proposed. By modeling the intrinsic interaction patterns of regional and contextual features, it adaptively fuses features across streams and enriches the diverse and complete representation of internal information. The module is not only robust to complex occlusions but also mitigates the feature-interference problem caused by similar appearances or scenes. Finally, a matching expansion module is proposed that enhances feature discriminability and completeness, providing more stable and accurate features for recognition. Comparisons with state-of-the-art methods on two occluded and holistic datasets demonstrate the superiority of the proposed method, and extensive ablation studies verify the effectiveness of each module.
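
To make the cross-stream fusion concrete, the sketch below shows one plausible form of cascaded cross-attention between a region stream and a context stream, where region tokens first attend to context tokens and the refined region tokens then serve as keys and values for the context stream. The class name, token dimensions, and layer layout are illustrative assumptions, not the paper's published implementation.

```python
# Minimal sketch of cascaded cross-attention fusion between a region stream and
# a context stream. Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class CascadedCrossAttentionFusion(nn.Module):
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        # Stage 1: region tokens query the context stream.
        self.region_to_context = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Stage 2: context tokens query the already-refined region stream (the cascade).
        self.context_to_region = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, region_feats, context_feats):
        # region_feats:  (B, N_r, dim) part/region tokens
        # context_feats: (B, N_c, dim) global/context tokens
        refined_region, _ = self.region_to_context(
            query=region_feats, key=context_feats, value=context_feats)
        refined_region = self.norm1(region_feats + refined_region)

        refined_context, _ = self.context_to_region(
            query=context_feats, key=refined_region, value=refined_region)
        refined_context = self.norm2(context_feats + refined_context)

        # Concatenate both refined streams into one fused token sequence.
        return torch.cat([refined_region, refined_context], dim=1)


# Usage: fuse 4 region tokens with 6 context tokens for a batch of 2 images.
fusion = CascadedCrossAttentionFusion()
fused = fusion(torch.randn(2, 4, 768), torch.randn(2, 6, 768))
print(fused.shape)  # torch.Size([2, 10, 768])
```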
