Abstract

In multi-camera pedestrian tracking, pedestrian appearance features are commonly used to solve the cross-camera reidentification problem. To make these appearance features robust to changes in pedestrian orientation across cameras, this paper proposes a pedestrian feature extraction method that incorporates direction features, built on a ResNet-50 deep network. The feature extraction network is trained with additional pedestrian direction information. To further improve the performance of the person reidentification model, a BNblock structure is added to the design: a batch normalization (BN) layer is inserted after the feature to obtain a normalized feature, which allows the triplet loss to converge together with the ID loss and thereby improves model performance. The proposed method was evaluated on the DukeMTMC and Market-1501 data sets, and the results show that combining pedestrian appearance features with direction information significantly improves reidentification accuracy; adding the BNblock to the network structure improves accuracy further.
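To illustrate the BNblock idea described above, the following is a minimal PyTorch sketch, assuming a torchvision ResNet-50 backbone; the names (BNNeckHead, feat_dim, num_ids) and the batch shape are illustrative assumptions, not details taken from the paper. The triplet loss would be computed on the feature before the BN layer and the ID (classification) loss on the logits produced from the normalized feature, which is what lets the two losses converge together.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class BNNeckHead(nn.Module):
        """BN layer inserted after the pooled feature, followed by an ID classifier."""
        def __init__(self, feat_dim=2048, num_ids=751):
            super().__init__()
            self.bn = nn.BatchNorm1d(feat_dim)      # BN applied to the raw feature
            self.bn.bias.requires_grad_(False)      # assumed choice: scale only, no shift
            self.classifier = nn.Linear(feat_dim, num_ids, bias=False)

        def forward(self, feat):
            feat_bn = self.bn(feat)                 # normalized feature
            logits = self.classifier(feat_bn)       # ID branch
            # triplet loss uses `feat`; ID (cross-entropy) loss uses `logits`
            return feat, feat_bn, logits

    backbone = resnet50(weights=None)
    backbone.fc = nn.Identity()                     # expose the 2048-d pooled feature
    head = BNNeckHead()

    x = torch.randn(8, 3, 256, 128)                 # a batch of pedestrian crops
    feat = backbone(x)
    feat, feat_bn, logits = head(feat)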
