Abstract

Vehicle re-identification (reID) aims to retrieve a target vehicle across a network of non-overlapping cameras, which is important for intelligent analysis of large-scale surveillance video. Many existing methods employ various techniques to obtain discriminative information, but they typically describe only a single view of the same vehicle. Hence, a generated multiple sparse information fusion method is proposed to exploit latent features from multiple views: three different deep networks extract multiple features from coarse to fine, and these features are treated as multi-view features. To fuse these features reasonably, the method transfers them into a common space, where distinctive features can be sought more easily. In particular, besides ResNet, two feature-learning networks are proposed to learn different features. One learns robust features by randomly dropping some features while training the reID model; the other combines salient features from different layers to form strong features for the reID task. Comprehensive experiments demonstrate that the proposed method achieves competitive performance on the benchmark datasets VehicleID and VeRi-776.
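The abstract does not specify how features are dropped during training; the idea resembles dropout-style regularization on embedding dimensions. The following is a minimal sketch of that general mechanism, assuming element-wise random masking with inverted scaling (the function name, drop probability, and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def drop_features(features, drop_prob=0.5, training=True, rng=None):
    """Randomly zero out feature dimensions during training (a dropout-style
    regularizer, used here only to illustrate the idea of learning robust
    features by dropping some at random); at inference, features pass through
    unchanged."""
    if not training or drop_prob <= 0.0:
        return features
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(features.shape) >= drop_prob
    # Inverted scaling keeps the expected activation magnitude the same
    # at training and inference time.
    return features * mask / (1.0 - drop_prob)

feats = np.ones((2, 8))  # a toy batch of two 8-dim embeddings
train_out = drop_features(feats, drop_prob=0.5, training=True)
eval_out = drop_features(feats, drop_prob=0.5, training=False)
```

At training time each surviving dimension is rescaled by `1 / (1 - drop_prob)`, so the evaluation path needs no rescaling at all.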
