Abstract

Person re-identification (re-id) aims to match a specific person across multiple non-overlapping cameras. It is a challenging task because the same person's appearance can differ greatly across camera views due to large pose variations. To overcome this issue, in this paper we propose a novel unified person re-id framework that jointly exploits person poses and identities for simultaneous person image synthesis under arbitrary poses and pose-invariant person re-identification. The framework is composed of a GAN-based network and two Feature Extraction Networks (FENs), and enjoys the following merits. First, it is a unified generative adversarial model for person image generation and person re-identification. Second, a pose estimator is incorporated into the generator as a supervisor during training, which effectively aids pose transfer and guides image generation toward any desired pose; as a result, the proposed model can automatically generate a person image under an arbitrary pose. Third, the identity-sensitive representation is explicitly disentangled from pose variations through person identity and pose embeddings. Fourth, the learned re-id model generalizes better to a new person re-id dataset when the synthesized images are used as auxiliary samples. Extensive experimental results on four standard benchmarks, Market-1501 [69], DukeMTMC-reID [40], CUHK03 [23], and CUHK01 [22], demonstrate that the proposed model performs favorably against state-of-the-art methods.
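
To make the described joint objective concrete, below is a minimal PyTorch-style sketch of how a pose-conditioned generator, a discriminator, an identity Feature Extraction Network, and a pose-estimator supervisor could be wired into one generator loss. All module architectures, names (Generator, FeatureExtractionNet, training_step), tensor dimensions (e.g. 18 pose-keypoint heatmap channels, 128-d identity embeddings), and loss weights are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps (source image, target pose heatmaps) -> synthesized image."""
    def __init__(self, img_ch=3, pose_ch=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + pose_ch, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, img_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, img, pose):
        # Condition generation on the desired pose by channel-wise concatenation.
        return self.net(torch.cat([img, pose], dim=1))

class Discriminator(nn.Module):
    """Scores image patches as real or synthesized (PatchGAN-style stub)."""
    def __init__(self, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 1, 4, 1, 1),
        )

    def forward(self, img):
        return self.net(img)

class FeatureExtractionNet(nn.Module):
    """One of the two FENs: embeds an image into an identity (or pose) space."""
    def __init__(self, img_ch=3, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(img_ch, 64, 4, 2, 1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

def training_step(G, D, fen_id, pose_estimator, src_img, tgt_img, tgt_pose,
                  lambda_pose=10.0, lambda_id=1.0):
    """One generator update combining adversarial, pose-supervision, and
    identity-preservation terms (the lambda weights are assumptions)."""
    fake = G(src_img, tgt_pose)

    # Adversarial term: the generator tries to fool the discriminator.
    logits = D(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Pose supervisor: a pose estimator checks that the synthesized image
    # actually carries the desired target pose.
    pose_loss = F.mse_loss(pose_estimator(fake), tgt_pose)

    # Identity preservation: the identity FEN should embed the synthesized
    # image close to a real image of the same person.
    id_loss = F.mse_loss(fen_id(fake), fen_id(tgt_img).detach())

    return adv + lambda_pose * pose_loss + lambda_id * id_loss

# Example wiring; the Conv2d stands in for a frozen pre-trained pose network
# whose heatmap output matches tgt_pose's shape.
G, D, fen_id = Generator(), Discriminator(), FeatureExtractionNet()
pose_estimator = nn.Conv2d(3, 18, 3, padding=1)

In a full setup the pose estimator would be a pre-trained network kept frozen, and the discriminator and the two FENs would be updated with their own losses in alternating steps, as is standard for adversarial training.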
