Abstract
Person search aims to localize a queried person within a gallery of uncropped, realistic images. Unlike re-identification (Re-ID), person search deals with the entire scene image, which contains rich and diverse visual context. However, existing works mainly focus on the person's appearance while ignoring other essential intra- and inter-image context information. To comprehensively leverage the intra- and inter-image context, we propose a unified framework termed MI3C, comprising the Intra-image Multi-View Context network (IMVC) and the Inter-image Group Context Ranking algorithm (IGCR). Concretely, the IMVC collaboratively integrates features from the scene, surrounding, instance, and part views to generate the final ID feature for person search. Furthermore, the IGCR algorithm employs group matching results between query and gallery image pairs to measure holistic image matching similarity, which is adopted as part of the ranking metric to yield a more robust ranking over the whole gallery. Extensive experiments on two popular person search benchmarks demonstrate that by mining intra- and inter-image context, our method outperforms previous state-of-the-art methods by clear margins. Specifically, we achieve 96.7% mAP and 97.1% top-1 accuracy on the CUHK-SYSU dataset, and 55.6% mAP and 90.8% top-1 accuracy on the PRW dataset.
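The abstract outlines two ideas: fusing multi-view features into one ID embedding (IMVC) and blending instance similarity with an image-level group similarity when ranking the gallery (IGCR). A minimal sketch of both, assuming concatenation-based fusion, cosine similarity, and a weight `alpha` (all hypothetical choices not specified by the abstract):

```python
import numpy as np

def l2norm(x):
    # L2-normalize feature vectors along the last axis.
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    return x / np.maximum(n, 1e-12)

def fuse_views(scene, surround, instance, part):
    # Hypothetical IMVC-style fusion: concatenate the four view
    # features and normalize to obtain the final ID feature.
    return l2norm(np.concatenate([scene, surround, instance, part], axis=-1))

def rank_gallery(query_feat, gallery_feats, group_sim, alpha=0.7):
    # Hypothetical IGCR-style metric: blend instance cosine similarity
    # with a precomputed image-level group-matching similarity.
    # `alpha` is an assumed weighting, not taken from the paper.
    inst_sim = gallery_feats @ query_feat
    score = alpha * inst_sim + (1.0 - alpha) * group_sim
    return np.argsort(-score)  # gallery indices, best match first
```

The sketch only illustrates the data flow; the paper's actual networks, fusion scheme, and group-matching procedure are learned and more involved.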