Abstract

Person re-identification (ReID) seeks to identify the same individual across different cameras by matching their corresponding images. Current ReID datasets are limited in size and diversity, especially with respect to clothing changes, making traditional techniques vulnerable to appearance variations. Moreover, current approaches rely heavily on appearance features for discrimination, which is unreliable when a person's appearance changes. We hypothesize that ReID accuracy can be improved by training the model on a large volume of diversified data and by combining multiple feature types for discrimination. We employ image channel shuffling as a data augmentation method to produce a large volume of diversified training data. We also propose a two-stream visual and spatio-temporal method to learn features suited to appearance-change scenarios: appearance features from the visual stream are combined with spatio-temporal information to discriminate between two people. The proposed approach is evaluated for robustness on both short-term and long-term datasets. The two-stream approach outperforms earlier methods, achieving Rank-1 accuracies of 98.6% on Market1501, 95.52% on DukeMTMC-reID, 76.21% on LTCC, and 91.76% on VC-Clothes.
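The channel-shuffling augmentation mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it simply assumes RGB images stored as height × width × channel arrays, and the function name is illustrative. Permuting the three colour channels changes apparent clothing colour while preserving shape and pose, which is why it multiplies appearance diversity in the training set.

```python
import itertools
import numpy as np

def channel_shuffle_augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate variants of an RGB image by permuting its colour channels.

    Each of the 3! = 6 channel orderings yields one variant (the first
    permutation is the identity, i.e. the original image). Only apparent
    colours change; spatial structure is untouched.
    """
    return [image[:, :, list(p)] for p in itertools.permutations(range(3))]

# Example: one RGB image yields 6 channel-permuted variants.
img = np.random.randint(0, 256, size=(64, 32, 3), dtype=np.uint8)
variants = channel_shuffle_augment(img)
print(len(variants))  # 6
```

In a training pipeline, such variants would typically be generated on the fly (or a random permutation sampled per image) rather than stored, since they are cheap to compute.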
