Abstract

Video-based person re-identification (Re-ID) has drawn increasing attention because video sequences offer richer spatial and temporal information that can reduce visual ambiguities and occlusion. For visual ambiguities, multi-scale features help distinguish similar pedestrian sequences through complementary semantic information at different levels. For occlusion, a Graph Convolutional Network (GCN) can effectively exploit the complementary information between node pairs for the Re-ID task. In this paper, we propose a novel Multi-Scale Representation with Graph Learning (MSR-GL) network consisting of three branches: a global branch, a shallow branch, and a graph branch. The global branch and shallow branch extract multi-scale features from different layers of the CNN backbone. Specifically, an extra Bottleneck module, whose parameters are independent of the other branches, is introduced for the shallow feature maps. In the graph branch, adjacency relationships are dynamically modeled through a temporal-spatial symmetric transformation between nodes. The node features are then updated with the adjacency matrix and aggregated into video-level graph features. We conduct extensive experiments on three widely adopted benchmarks (i.e., MARS, DukeMTMC-VideoReID, and iLIDS-VID). Results show that our method outperforms several recent state-of-the-art methods, achieving 90.28% Rank-1 accuracy and 85.20% mAP on MARS.
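To make the graph branch more concrete, the sketch below illustrates one plausible reading of it: frame-level features serve as graph nodes, a symmetric pairwise transformation produces a dynamic adjacency matrix, the nodes are updated with one GCN-style layer, and the result is pooled into a video-level feature. All module names, dimensions, and the exact form of the symmetric transformation are assumptions for illustration; the paper's actual implementation may differ.

```python
# Minimal sketch of the graph branch described in the abstract.
# Module names, dimensions, and the symmetric transformation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphBranchSketch(nn.Module):
    """Builds a dynamic adjacency matrix from frame-level node features,
    updates the nodes with one GCN-style layer, and pools to a video-level vector."""

    def __init__(self, in_dim=2048, embed_dim=512):
        super().__init__()
        # Two projections used to score node pairs (hypothetical design choice).
        self.theta = nn.Linear(in_dim, embed_dim)
        self.phi = nn.Linear(in_dim, embed_dim)
        # Weight for the GCN-style node update.
        self.gcn_weight = nn.Linear(in_dim, in_dim)

    def forward(self, nodes):
        # nodes: (B, T, C) frame-level features, one node per frame.
        q = self.theta(nodes)                       # (B, T, D)
        k = self.phi(nodes)                         # (B, T, D)
        scores = torch.bmm(q, k.transpose(1, 2))    # (B, T, T) pairwise affinities
        # Symmetrize so A[i, j] == A[j, i], then normalize row-wise.
        scores = (scores + scores.transpose(1, 2)) / 2
        adj = F.softmax(scores, dim=-1)             # dynamic adjacency matrix
        # One GCN-style update: aggregate neighbors, transform, add a residual.
        updated = nodes + F.relu(self.gcn_weight(torch.bmm(adj, nodes)))
        # Temporal average pooling gives the video-level graph feature.
        return updated.mean(dim=1)                  # (B, C)


if __name__ == "__main__":
    frames = torch.randn(2, 8, 2048)   # 2 clips, 8 frames, 2048-dim features
    video_feat = GraphBranchSketch()(frames)
    print(video_feat.shape)            # torch.Size([2, 2048])
```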
