Abstract

Benefiting from convolutional neural networks, person re-identification (Re-ID) has achieved substantial improvements in recent state-of-the-art studies. However, person Re-ID in real-world scenarios still faces intricate challenges, especially the color problem: scenarios in which different people wear clothes of the same or similar colors make person Re-ID considerably more difficult. In this work, we formulate a novel two-stream channel-exchanged multi-layer network (CENet), which not only learns color-robust features to alleviate color interference but can also be regarded as a data augmentation method. For the first stream, we use the original RGB image as the input to extract global pedestrian features. For the second stream, we obtain six differently colored images by exchanging the three channels of the RGB image and randomly select one of them as the input, which alleviates the interference of colors. Moreover, because the channel-swapped images retain the same identities, this procedure also enlarges the training set considerably. Extensive experiments on three benchmark datasets, Market-1501, DukeMTMC-reID, and CUHK03, demonstrate that the proposed method achieves Rank-1/mAP of 95.6%/88.1% and 89.2%/77.5% on Market-1501 and DukeMTMC-reID, respectively, significantly outperforming current state-of-the-art methods.
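
To make the channel-exchange idea concrete, the following sketch illustrates one plausible form of the augmentation step described above: permuting the R, G, and B channels of an input image and sampling one of the six orderings at random for the second stream. The function name `random_channel_exchange`, the NumPy array representation, and whether the identity ordering is counted among the six are illustrative assumptions rather than the authors' implementation.

```python
import random
from itertools import permutations

import numpy as np

# The six possible orderings of the (R, G, B) channels. The identity
# ordering (0, 1, 2) reproduces the original image; the other five
# yield color-altered copies of the same pedestrian identity.
CHANNEL_ORDERS = list(permutations(range(3)))


def random_channel_exchange(image: np.ndarray) -> np.ndarray:
    """Return a copy of an H x W x 3 RGB image with its channels permuted.

    One of the six channel orderings is chosen uniformly at random,
    producing a color-altered image that keeps the same identity label,
    so it can be used directly as extra training data.
    """
    order = random.choice(CHANNEL_ORDERS)
    return image[:, :, list(order)]


if __name__ == "__main__":
    # Hypothetical usage: feed the original image to the first stream
    # and a randomly channel-exchanged copy to the second stream.
    rgb = np.random.randint(0, 256, size=(256, 128, 3), dtype=np.uint8)
    exchanged = random_channel_exchange(rgb)
    print(rgb.shape, exchanged.shape)
```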
