Abstract

Existing person re-identification methods mainly rely on the visual appearance captured by cameras for identity matching. However, because visual data are sensitive to occlusion, blur, clothing changes, etc., existing methods struggle to distinguish pedestrians in challenging scenarios. Inspired by the fact that most pedestrians carry smart wireless devices, e.g., mobile phones that can be sensed by WiFi or cellular networks as wireless positioning signals, we propose to exploit these free yet informative wireless signals to assist person re-identification. It is well recognized that wireless signals are robust to the visual noise mentioned above, making them a good complement to visual data. To make full use of these multi-modal clues for person re-identification, we propose a multi-modal context propagation framework (MCPF) that contains a recurrent context propagation module (RCPM) and an unsupervised multi-modal cross-domain method (UMM-ReID). RCPM enables context information to be continuously propagated and fused between visual data and wireless data. UMM-ReID utilizes wireless signals to constrain the estimation of pseudo labels. To evaluate our approach, we contribute a new wireless positioning person re-identification (WP-ReID) dataset. Extensive experiments demonstrate the effectiveness of the proposed method. Benefiting from the collaboration of RCPM and UMM-ReID, the proposed MCPF framework achieves a significant performance improvement over existing methods.
