Abstract

Long-term person re-identification (Re-ID) aims to retrieve the same pedestrian captured by different cameras over a long duration, and therefore faces the challenge of clothes changing. Existing person Re-ID methods typically assume that pedestrians rarely change clothes and focus on clothes-dependent identity features, so they cannot achieve satisfactory recognition performance when this assumption does not hold. To alleviate the influence of clothes changing, this paper proposes a dual-attribute fusion network (DAFN) that learns clothes-independent identity features. DAFN takes the original RGB image, the gray-scale image, and the contour image of a pedestrian as input. With the help of the proposed clothes-independent self-attention modules (CSM), discriminative clothes-independent identity features can be extracted. In addition, lightweight feature-enhanced self-attention modules (FSM) are designed in DAFN to improve the robustness of the feature representation. Empirical studies show that DAFN achieves state-of-the-art performance on long-term person Re-ID benchmarks.
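The abstract does not specify the internals of DAFN, CSM, or FSM, but the three-input design it describes can be sketched minimally in PyTorch. In the sketch below, all names (`DAFNSketch`, `SelfAttention2d`, `stem`, `feat_dim`) and the concatenation fusion are illustrative assumptions, not the paper's method; a single generic self-attention block stands in for both CSM and FSM, whose actual designs the abstract leaves open.

```python
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Generic spatial self-attention block, used here only as a stand-in
    for the paper's CSM and FSM modules (their exact designs are not
    given in the abstract)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out  # residual: starts as identity mapping


class DAFNSketch(nn.Module):
    """Hypothetical three-branch network: the RGB, gray-scale, and contour
    images each pass through a small convolutional stem with self-attention,
    and the branch features are concatenated for identity classification.
    The real DAFN fusion scheme may differ."""

    def __init__(self, num_ids=751, feat_dim=64):
        super().__init__()

        def stem(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, feat_dim, 3, stride=2, padding=1),
                nn.BatchNorm2d(feat_dim),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1),
                nn.BatchNorm2d(feat_dim),
                nn.ReLU(inplace=True),
                SelfAttention2d(feat_dim),
            )

        self.rgb_branch = stem(3)      # full appearance (clothes-dependent)
        self.gray_branch = stem(1)     # texture without color cues
        self.contour_branch = stem(1)  # body shape (clothes-independent)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim * 3, num_ids)

    def forward(self, rgb, gray, contour):
        feats = [
            self.pool(branch(x)).flatten(1)
            for branch, x in [
                (self.rgb_branch, rgb),
                (self.gray_branch, gray),
                (self.contour_branch, contour),
            ]
        ]
        fused = torch.cat(feats, dim=1)  # simple concatenation fusion
        return self.classifier(fused)


# Usage: a batch of 256x128 pedestrian crops in the three modalities.
model = DAFNSketch(num_ids=751)
rgb = torch.randn(2, 3, 256, 128)
gray = torch.randn(2, 1, 256, 128)
contour = torch.randn(2, 1, 256, 128)
logits = model(rgb, gray, contour)  # shape: (2, 751)
```

The intuition behind the multi-modal input is that the gray-scale and contour branches carry texture and body-shape cues that survive a change of clothes, so attention modules operating on them can emphasize clothes-independent identity evidence even when the RGB appearance changes.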
