Abstract

Person re-identification (re-id) aims to match people across disjoint camera views. Large viewpoint variations of pedestrians caused by camera view changes reduce the accuracy of person re-id. However, viewpoint variation follows several relatively stable correspondence patterns (e.g., front/back, front/side, side/back) because of the constraints imposed by fixed camera locations. In this paper, we propose a viewpoint-correspondence-based metric learning model to capture the intrinsic difference between two persons. First, we introduce a deep convolutional neural network to identify the viewpoints of pedestrians. Then, pedestrian pairs are grouped into several classes according to their viewpoint correspondence patterns. Finally, a specific distance metric is learned for each class. Our contributions are (1) the classification of viewpoint correspondence patterns and (2) the viewpoint-specific distance metric, which selects the optimal metric for a given pair of persons. The experimental results demonstrate that our method achieves performance comparable to that of representative methods.
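To make the viewpoint-specific metric idea concrete, the sketch below shows one possible way to select a per-pattern distance, assuming viewpoint labels (front/side/back) come from a viewpoint classifier and that a Mahalanobis-style matrix has been learned for each correspondence pattern. The class and function names, the identity-matrix placeholders, and the 128-dimensional features are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed viewpoint labels; the paper groups pairs into correspondence
# patterns such as front/back, front/side, side/back.
VIEWPOINTS = ("front", "side", "back")

def correspondence_pattern(view_a, view_b):
    """Map an unordered pair of viewpoints to a pattern key, e.g. ('back', 'front')."""
    return tuple(sorted((view_a, view_b)))

class ViewpointSpecificMetric:
    """Sketch of a viewpoint-specific Mahalanobis distance.

    `metrics` maps each correspondence pattern to a positive semi-definite
    matrix M. Identity matrices stand in for metrics that would be learned
    on the subset of training pairs sharing that pattern.
    """

    def __init__(self, feature_dim):
        patterns = {correspondence_pattern(a, b)
                    for a in VIEWPOINTS for b in VIEWPOINTS}
        self.metrics = {p: np.eye(feature_dim) for p in patterns}

    def distance(self, feat_a, view_a, feat_b, view_b):
        # Select the metric matching the pair's viewpoint correspondence pattern.
        M = self.metrics[correspondence_pattern(view_a, view_b)]
        diff = feat_a - feat_b
        return float(diff @ M @ diff)

# Usage: features would come from an appearance descriptor and the viewpoint
# labels from the CNN viewpoint classifier described in the abstract.
metric = ViewpointSpecificMetric(feature_dim=128)
fa, fb = np.random.rand(128), np.random.rand(128)
print(metric.distance(fa, "front", fb, "back"))
```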
