Abstract

Learning fine-grained features is crucial to improving the performance of person re-identification (Re-ID). Although existing methods have made significant progress, exploiting multi-level information to obtain fine-grained features remains underexplored in this field. To address this gap, we propose a lightweight person Re-ID method named Joint Multi-Level Feature Network (JMLFNet) that learns robust feature representations for the Re-ID task. Specifically, we design a Multi-Attention Block (MAB) and embed it into the lightweight backbone network to improve performance by making the network focus on the key parts of pedestrian images. Meanwhile, we propose a Multi-Level Feature Extraction (MLFE) method to extract multi-granularity features carrying both high-level semantic information and low-level detail information, which effectively captures the feature diversity of pedestrian images. Furthermore, we design a Feature Fusion Block (FFB) that fuses the fine-grained high-level and low-level features to obtain a more discriminative feature representation of pedestrian images. Extensive experiments on the popular Market1501 and DukeMTMC-reID datasets demonstrate that the proposed JMLFNet achieves competitive performance compared with state-of-the-art methods.
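
The sketch below illustrates the overall pipeline the abstract describes: an attention block embedded in a lightweight backbone, features taken from a low-level and a high-level stage, and a fusion step that combines them into a single Re-ID descriptor. It is a minimal conceptual sketch, not the authors' implementation; the attention layout, the choice of MobileNetV2 stages, the fusion by concatenation plus a linear projection, and the 751-identity classifier head (the Market1501 training split size) are all assumptions made for illustration.

```python
# Conceptual sketch of a JMLFNet-style pipeline (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torchvision


class MultiAttentionBlock(nn.Module):
    """Channel + spatial attention: a hypothetical stand-in for the MAB."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_fc(x)                        # channel attention
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x


class JMLFNetSketch(nn.Module):
    """Lightweight backbone + multi-level features + fusion (illustrative)."""
    def __init__(self, num_ids=751, feat_dim=512):
        super().__init__()
        backbone = torchvision.models.mobilenet_v2(weights=None).features
        self.low = backbone[:7]      # assumed low-level (detail) stage, 32 channels out
        self.high = backbone[7:]     # assumed high-level (semantic) stage, 1280 channels out
        self.attn_low = MultiAttentionBlock(32)
        self.attn_high = MultiAttentionBlock(1280)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fusion block stand-in: concatenate pooled low/high features, then project.
        self.fuse = nn.Linear(32 + 1280, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        f_low = self.attn_low(self.low(x))            # fine-grained detail features
        f_high = self.attn_high(self.high(f_low))     # semantic features
        v = torch.cat([self.pool(f_low).flatten(1),
                       self.pool(f_high).flatten(1)], dim=1)
        embedding = self.fuse(v)                      # fused Re-ID descriptor
        return embedding, self.classifier(embedding)


if __name__ == "__main__":
    model = JMLFNetSketch()
    emb, logits = model(torch.randn(2, 3, 256, 128))  # a common Re-ID input size
    print(emb.shape, logits.shape)                    # (2, 512), (2, 751)
```

In practice such a model would be trained with identity classification and/or triplet losses, with the fused embedding used for retrieval at test time; those training details are beyond what the abstract specifies.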
