Abstract

Spatial misalignment caused by variations in poses and viewpoints is one of the most critical issues hindering performance improvement in existing person re-identification (Re-ID) algorithms. Although it is straightforward to explore correspondence learning algorithms for alignment, online learning is intractable for negative pairs due to the intrinsic visual differences between negative pairs and efficiency concerns. To address this problem, in this paper, we present a robust and efficient graph correspondence transfer (REGCT) approach for explicit spatial alignment in Re-ID. Specifically, we propose an off-line correspondence learning and on-line correspondence transfer framework. During training, patch-wise correspondences between positive training pairs are established via graph matching. By exploiting both spatial and visual contexts of human appearance in graph matching, meaningful semantic correspondences can be obtained. During testing, the off-line learned patch-wise correspondence templates are transferred to test pairs with similar pose-pair configurations for local feature distance calculation. To enhance the robustness of correspondence transfer, we design a novel pose context descriptor to accurately model human body configurations, and present an approach to measure the similarity between a pair of pose context descriptors. Meanwhile, to improve testing efficiency, we propose a correspondence template ensemble method based on a voting mechanism, which significantly reduces the number of patch-wise matchings involved in distance calculation. With the aforementioned strategies, the REGCT model can effectively and efficiently handle the spatial misalignment problem in Re-ID. Extensive experiments on five challenging benchmarks, including VIPeR, Road, PRID450S, 3DPES, and CUHK01, evidence the superior performance of REGCT over other state-of-the-art approaches.
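To illustrate the testing stage described above, the following is a minimal sketch (not the authors' implementation) of how transferred correspondence templates and a majority-voting ensemble might be combined for local feature distance calculation. The function name, template representation (a dict from patch index in image A to patch index in image B), and the fallback to identity matching are all hypothetical simplifications for illustration.

```python
from collections import Counter

import numpy as np


def transfer_distance(feats_a, feats_b, templates):
    """Illustrative distance between a test pair under transferred templates.

    feats_a, feats_b: (num_patches, feat_dim) arrays of patch features.
    templates: correspondence templates from pose-similar training pairs;
    each is a dict mapping a patch index in image A to one in image B.
    """
    # Voting-based ensemble: keep only patch correspondences that appear
    # in a majority of the selected templates (hypothetical simplification
    # of the paper's template ensemble method).
    votes = Counter(pair for t in templates for pair in t.items())
    k = len(templates)
    kept = [pair for pair, count in votes.items() if count > k / 2]
    if not kept:  # no majority agreement: fall back to identity matching
        kept = [(i, i) for i in range(len(feats_a))]
    # Local feature distance: mean Euclidean distance over matched patches.
    return float(np.mean([np.linalg.norm(feats_a[i] - feats_b[j])
                          for i, j in kept]))
```

Because only the majority-voted correspondences enter the distance sum, the number of patch-wise matchings per test pair shrinks with template agreement, which is the efficiency benefit the abstract points to.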
