Abstract
Fine-grained local features complement global pedestrian features, and combining the two has become an essential strategy for improving discriminative performance in person re-identification (PReID). Existing part-based methods mostly extract representative semantic parts according to human visual habits or prior knowledge and concentrate on spatial partition strategies, but they ignore the significant influence of channel information on the PReID task. Therefore, in this paper we propose an end-to-end multi-branch network architecture (MCSN) that jointly exploits multi-level global fusion features, channel features, and spatial features to learn more diverse and discriminative pedestrian representations. Notably, the effect of multi-level fusion features on model performance is taken into account when extracting global features. In addition, to improve training stability and generalization, the BNNeck and a joint loss function strategy are applied to all vector representation branches. Extensive comparative evaluations on three mainstream image-based benchmarks, Market-1501, DukeMTMC-ReID, and MSMT17, validate the advantages of the proposed model, which outperforms previous state-of-the-art methods on ReID tasks.
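As an illustration only, the following PyTorch sketch shows the general pattern the abstract describes: a shared backbone feeding a global branch, a spatial (horizontal-part) branch, and a channel-grouped branch, each ending in a BNNeck head trained with a joint ID + metric loss. This is not the authors' MCSN; the branch counts, the ResNet-50 backbone, and the placeholder triplet pairing are assumptions for demonstration.

```python
import torch
import torch.nn as nn
from torchvision import models


class BNNeckHead(nn.Module):
    """BNNeck head: the pre-BN feature feeds a metric loss (e.g. triplet),
    the post-BN feature feeds an identity classifier."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm1d(in_dim)
        self.bn.bias.requires_grad_(False)  # common BNNeck choice: no learnable shift
        self.classifier = nn.Linear(in_dim, num_classes, bias=False)

    def forward(self, feat):
        feat_bn = self.bn(feat)
        return feat, feat_bn, self.classifier(feat_bn)


class MultiBranchReID(nn.Module):
    """Hypothetical multi-branch ReID model (not the paper's MCSN): a shared
    ResNet-50 backbone followed by global, spatial-part and channel-group
    branches, each with its own BNNeck head."""
    def __init__(self, num_classes, num_parts=2, num_channel_groups=2):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # keep conv feature map
        self.gap = nn.AdaptiveAvgPool2d(1)
        c = 2048
        self.num_parts = num_parts
        self.num_channel_groups = num_channel_groups
        self.global_head = BNNeckHead(c, num_classes)
        self.part_heads = nn.ModuleList([BNNeckHead(c, num_classes) for _ in range(num_parts)])
        self.channel_heads = nn.ModuleList(
            [BNNeckHead(c // num_channel_groups, num_classes) for _ in range(num_channel_groups)])

    def forward(self, x):
        fmap = self.backbone(x)                    # (B, 2048, H, W)
        outputs = []
        # Global branch: pool over the whole feature map.
        g = self.gap(fmap).flatten(1)
        outputs.append(self.global_head(g))
        # Spatial branch: split the feature map into horizontal stripes.
        for stripe, head in zip(fmap.chunk(self.num_parts, dim=2), self.part_heads):
            outputs.append(head(self.gap(stripe).flatten(1)))
        # Channel branch: split the pooled feature along the channel axis.
        for group, head in zip(g.chunk(self.num_channel_groups, dim=1), self.channel_heads):
            outputs.append(head(group))
        return outputs                             # list of (feat, feat_bn, logits) per branch


if __name__ == "__main__":
    model = MultiBranchReID(num_classes=751)       # 751 training identities in Market-1501
    imgs = torch.randn(4, 3, 256, 128)             # typical ReID input resolution
    labels = torch.randint(0, 751, (4,))
    ce = nn.CrossEntropyLoss()
    tri = nn.TripletMarginLoss(margin=0.3)
    loss = 0.0
    for feat, _, logits in model(imgs):
        # Joint loss: ID loss on logits plus a metric loss on the pre-BN feature.
        # The roll-based pairing below is only a runnable placeholder for a
        # proper batch-hard triplet miner.
        loss = loss + ce(logits, labels) + tri(feat, feat.roll(1, 0), feat.roll(2, 0))
    loss.backward()
    print(float(loss))
```

In a BNNeck-style setup, the pre-BN feature is typically used for the metric loss and retrieval distance, while the post-BN feature drives the classification loss; applying this head to every branch is one way to read the abstract's "applied to all vector representation branches".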