Abstract

Person re-identification (Re-ID) is a challenging task due to variations in pedestrian images, especially in cross-domain scenarios. Existing cross-domain person Re-ID approaches extract features from individual pedestrian images but ignore the correlations among them. In this paper, we propose Heterogeneous Convolutional Network (HCN) for cross-domain person Re-ID, which simultaneously learns the appearance information of pedestrian images and the correlations among them. To this end, we first utilize a Convolutional Neural Network (CNN) to extract appearance features for pedestrian images. We then construct a graph on the target dataset, where the appearance features serve as nodes and feature similarity defines the edges between them. Next, we propose Dual Graph Convolution (DGConv) to explicitly learn correlation information from both similar and dissimilar samples, which avoids the over-smoothing caused by a fully connected graph. Furthermore, we design HCN as a multi-branch structure to mine the structural information of pedestrians. We conduct extensive evaluations of HCN on three datasets, i.e., Market-1501, DukeMTMC-reID and MSMT17, and the results demonstrate that HCN outperforms state-of-the-art methods.
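
The abstract does not give implementation details, so the following is only a minimal sketch of the graph-construction and dual-convolution idea it describes, assuming cosine similarity over CNN features, top-k/bottom-k neighbor selection, and a simple subtractive fusion of the two branches; the actual HCN formulation (value of k, normalization, and fusion rule) may differ.

```python
# Hypothetical sketch of a Dual Graph Convolution (DGConv) step.
# Assumptions (not from the paper): cosine similarity, k-nearest/farthest
# neighbor graphs, separate weights per graph, subtractive fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_dual_graphs(features: torch.Tensor, k: int = 8):
    """Build row-normalized adjacency matrices over the k most-similar
    and k least-similar samples in the (unlabeled) target dataset.

    features: (N, D) appearance features from a CNN backbone.
    """
    normed = F.normalize(features, dim=1)
    sim = normed @ normed.t()                          # cosine similarity (N, N)
    n = sim.size(0)
    top_idx = sim.topk(k + 1, dim=1).indices           # most similar (includes self)
    bot_idx = (-sim).topk(k, dim=1).indices            # most dissimilar
    adj_sim = torch.zeros(n, n, device=sim.device).scatter_(1, top_idx, 1.0)
    adj_dis = torch.zeros(n, n, device=sim.device).scatter_(1, bot_idx, 1.0)
    # Row-normalize so each node averages over its neighbors.
    adj_sim = adj_sim / adj_sim.sum(dim=1, keepdim=True)
    adj_dis = adj_dis / adj_dis.sum(dim=1, keepdim=True)
    return adj_sim, adj_dis


class DGConv(nn.Module):
    """One dual graph convolution with separate weights for each graph."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_sim = nn.Linear(in_dim, out_dim, bias=False)
        self.w_dis = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_sim, adj_dis):
        # Aggregate from similar neighbors and dissimilar samples separately,
        # then fuse; fusing only over sparse neighborhoods (rather than a
        # fully connected graph) is what limits over-smoothing here.
        h_sim = adj_sim @ self.w_sim(x)
        h_dis = adj_dis @ self.w_dis(x)
        return F.relu(h_sim - h_dis)


# Usage: pooled features from a CNN backbone on the target dataset.
feats = torch.randn(256, 2048)                 # e.g. ResNet-50 pooled features
a_sim, a_dis = build_dual_graphs(feats, k=8)
refined = DGConv(2048, 512)(feats, a_sim, a_dis)
```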
