Abstract

The core of person re-identification (Re-ID) lies in robustly estimating similarities for each probe-gallery image pair. A common practice in existing works is to compute the similarity of each image pair independently, ignoring relations between different probe-gallery pairs. In this paper, we present a deep learning conditional random field (Deep-CRF) graph to model group-wise similarities within a batch of images, and regard the Re-ID task as a CRF node labeling problem. Unlike the existing deep CRF-based approach, where CRF inference is involved only in the training stage, our method fully exploits the potential of the CRF model by performing consistent inference in both training and testing. Specifically, we design unary potentials that compute each probe-gallery similarity separately. To efficiently encode relationships between different probe-gallery pairs, pairwise potentials are built on arbitrary node pairs and learned through a joint matching strategy based on a bidirectional LSTM. We pose CRF inference as an RNN learning process, in which unary and pairwise potentials are jointly optimized in an end-to-end manner. Extensive experiments on three large-scale person Re-ID datasets demonstrate the effectiveness of the proposed method. Our Deep-CRF outperforms previous graph-based deep learning approaches and exceeds the existing deep CRF framework by 8% in Rank-1 accuracy on the CUHK03 dataset. It is also competitive with current state-of-the-art methods.
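To make the inference-as-RNN idea concrete, the following is a minimal sketch of CRF inference unrolled as recurrent updates over a batch of probe-gallery similarity scores. It assumes binary same/different labels per pair and a sigmoid-based mean-field-style update; the function names, tensor shapes, and update rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def crf_inference_as_rnn(unary, pairwise, num_steps=5):
    """Illustrative CRF inference unrolled as recurrent updates.

    unary:    (N,) tensor of per-pair similarity logits (unary potentials).
    pairwise: (N, N) tensor encoding compatibility between different
              probe-gallery pairs (pairwise potentials).
    Returns refined per-pair similarity estimates in [0, 1].
    """
    q = torch.sigmoid(unary)                # initialize beliefs from unary term
    for _ in range(num_steps):              # each step acts as one RNN iteration
        message = pairwise @ q              # aggregate evidence from related pairs
        q = torch.sigmoid(unary + message)  # combine with unary term, renormalize
    return q

# Hypothetical usage with random potentials for a batch of 8 pairs
unary = torch.randn(8)
pairwise = 0.1 * torch.randn(8, 8)
print(crf_inference_as_rnn(unary, pairwise))
```

Unrolling the fixed-point updates as explicit iterations is what allows the unary and pairwise terms to be trained jointly by backpropagation, mirroring the end-to-end optimization described above.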
