Abstract

Person re-identification (re-ID) aims to determine whether a specific person appears in an image set or video using computer vision techniques. State-of-the-art unsupervised re-ID methods extract image features with CNN-based networks and store these features in a memory dictionary for identity matching. However, the global features extracted by these methods suffer from information redundancy and neglect the constraints among internal feature parts. To overcome these problems, a Hybrid Partial-constrained Learning (HPcL) network with orthogonality regularization is proposed to learn a discriminative visual representation by generating hybrid features. Specifically, the hybrid features are generated by our Dynamic Fusion Module (DFM) and used to initialize the memory dictionary and perform identity matching, which constrains each part of the features extracted by our Multi-Scale (M-S) module and yields robust visual representations. In addition, a new orthogonal regularization method is introduced to enforce orthogonality of the kernel weights and features, which reduces the correlations among features. Extensive experimental results on the Market-1501, DukeMTMC-reID, PersonX, and MSMT17 datasets demonstrate that our method is effective and superior to state-of-the-art methods.
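
The abstract does not give the regularizer's exact formulation; as an illustration only, the sketch below implements one common soft orthogonality penalty, ||W Wᵀ − I||²_F, applied to a flattened convolution kernel. The function name, the lambda_orth weight, and the PyTorch framing are assumptions for this sketch, not the paper's implementation.

    import torch

    def soft_orthogonality_penalty(weight: torch.Tensor) -> torch.Tensor:
        """Frobenius-norm penalty ||W W^T - I||_F^2 on a flattened weight.

        Illustrative assumption: a conv kernel of shape
        (out_channels, in_channels, kH, kW) is flattened so each row is
        one filter; the penalty then decorrelates the output filters.
        """
        w = weight.reshape(weight.shape[0], -1)   # (out, in*kH*kW)
        gram = w @ w.t()                          # filter Gram matrix
        eye = torch.eye(gram.shape[0], device=w.device, dtype=w.dtype)
        return torch.linalg.norm(gram - eye, ord="fro") ** 2

    # Usage sketch: add the penalty, scaled by a hypothetical weight
    # lambda_orth, to the task loss for selected layers of a model.
    # loss = task_loss + lambda_orth * sum(
    #     soft_orthogonality_penalty(m.weight)
    #     for m in model.modules() if isinstance(m, torch.nn.Conv2d)
    # )

Minimizing this penalty pushes the filter Gram matrix toward the identity, i.e., toward mutually orthogonal, unit-norm filters, which is one standard way to reduce redundancy among learned features.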
