Abstract

Person Re-identification (ReID) has seen remarkable progress in recent years. However, its application in real-world scenarios is limited by the disparity among different cameras and datasets. In general, it remains challenging to generalize ReID algorithms from one domain to another, especially when the target domain is unknown. To address this issue, we develop a 3D-guided adversarial transform (3D-GAT) network that exploits the transferability of source training data to facilitate learning domain-independent knowledge. Guided by a 3D body model and human poses, 3D-GAT uses image-to-image translation to synthesize person images under different conditions while preserving identity-related features as much as possible. With these augmented training data, ReID models can more easily learn how a person's appearance changes under varying viewpoints and poses, most of which are not seen in the original training data, and thus achieve higher ReID accuracy, especially in an unknown domain. Extensive experiments on Market-1501, DukeMTMC-reID and CUHK03 demonstrate the effectiveness of the proposed approach, which is competitive with the baseline models on the source dataset and sets a new state of the art in direct transfer to other datasets.

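The abstract does not give architectural details, but the core idea it describes, synthesizing pose- and viewpoint-varied copies of each training image and feeding them back into ReID training, can be illustrated with a short sketch. The following PyTorch example is a hypothetical illustration, not the paper's actual 3D-GAT network: the generator, its layer sizes, and the pose encoding (here a stack of keypoint/3D-surface channels) are assumptions, and the adversarial discriminator and 3D body model of the real method are omitted.

```python
# Minimal sketch (not the paper's architecture): a pose-conditioned
# image-to-image generator used purely as a data-augmentation step for
# ReID training. All module names, channel sizes, and the pose encoding
# are illustrative assumptions.
import torch
import torch.nn as nn


class PoseConditionedGenerator(nn.Module):
    """Translates a person image toward a target pose/viewpoint.

    The target pose is supplied as extra channels (e.g. a rendered keypoint
    or 3D-surface map) concatenated with the RGB input.
    """

    def __init__(self, pose_channels: int = 18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + pose_channels, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, target_pose: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([image, target_pose], dim=1))


def augment_batch(generator: nn.Module,
                  images: torch.Tensor,
                  target_poses: torch.Tensor) -> torch.Tensor:
    """Synthesize extra views of each identity and append them to the batch,
    so the downstream ReID model sees viewpoints/poses absent from the
    original training data."""
    with torch.no_grad():
        synthetic = generator(images, target_poses)
    return torch.cat([images, synthetic], dim=0)


if __name__ == "__main__":
    gen = PoseConditionedGenerator()
    imgs = torch.randn(4, 3, 256, 128)    # ReID crops: (batch, RGB, H, W)
    poses = torch.randn(4, 18, 256, 128)  # hypothetical pose/3D-surface maps
    batch = augment_batch(gen, imgs, poses)
    print(batch.shape)                    # torch.Size([8, 3, 256, 128])
```

In this sketch the generator is frozen during augmentation; in practice it would first be trained (adversarially, per the abstract) so that the synthesized images remain identity-preserving before they are mixed into the ReID training set.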