Abstract

Person re-identification (ReID) aims to associate the identities of pedestrians captured by cameras across non-overlapping areas. Video-based ReID plays an important role in intelligent video surveillance systems and has attracted growing attention in recent years. In this paper, we propose an end-to-end video-based ReID framework based on the convolutional neural network (CNN) for efficient spatio-temporal modeling and enhanced similarity measurement. Specifically, we build our sequence descriptor by applying basic mathematical operations to semantic mid-level image features, which avoids time-consuming computations and the loss of spatial correlations. We further extract image features hierarchically from multiple intermediate CNN stages to build multi-level sequence descriptors. For the descriptor at each stage, we design an effective auxiliary pairwise loss that is jointly optimized with a triplet loss. To integrate the hierarchical representation, we propose an intuitive yet effective summation-based similarity integration scheme that matches identities more accurately. Furthermore, we extend our framework with a multi-model ensemble strategy, which effectively assembles three popular CNN models to represent walking sequences more comprehensively and improve performance. Extensive experiments on three video-based ReID datasets show that the proposed framework outperforms state-of-the-art methods.
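To make the two core ideas in the abstract concrete, the following is a minimal Python sketch of (a) building a sequence descriptor by a basic mathematical operation (here assumed to be temporal average pooling) over per-frame mid-level CNN feature maps, and (b) summation-based similarity integration across multiple CNN stages. The function names, array shapes, and the choices of average pooling and cosine similarity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sequence_descriptor(frame_features):
    """Build a sequence descriptor from one CNN stage.

    frame_features: array of shape (T, C, H, W) holding mid-level
    feature maps for T frames of a walking sequence (hypothetical shape).
    Temporal average pooling is assumed as the 'basic mathematical
    calculation'; the pooled map is flattened into a single vector.
    """
    return frame_features.mean(axis=0).reshape(-1)

def summed_similarity(stages_a, stages_b):
    """Summation-based similarity integration across CNN stages.

    stages_a, stages_b: lists of (T, C, H, W) arrays, one per intermediate
    stage, for two sequences. Per-stage cosine similarities between the
    sequence descriptors are summed to produce the final matching score.
    """
    total = 0.0
    for fa, fb in zip(stages_a, stages_b):
        da, db = sequence_descriptor(fa), sequence_descriptor(fb)
        total += float(np.dot(da, db) /
                       (np.linalg.norm(da) * np.linalg.norm(db) + 1e-12))
    return total

# Toy usage with random features: two sequences, two stages each.
rng = np.random.default_rng(0)
seq_a = [rng.standard_normal((8, 256, 16, 8)), rng.standard_normal((8, 512, 8, 4))]
seq_b = [rng.standard_normal((8, 256, 16, 8)), rng.standard_normal((8, 512, 8, 4))]
print(summed_similarity(seq_a, seq_b))
```

In this sketch, ranking candidate gallery sequences by the summed score plays the role of the similarity integration scheme; the joint triplet and auxiliary pairwise losses described in the abstract would be applied per stage during training and are not shown here.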
