Abstract

Person re-identification (re-ID) aims to match images of the same pedestrian across different cameras. Deep-learning-based re-ID typically comprises two phases: feature extraction and metric calculation. We focus on extracting more discriminative image features for re-ID. To this end, we propose a multilevel deep representation fusion (MDRF) model based on a convolutional neural network. Specifically, the MDRF model extracts image features at different network levels in a single forward pass. These multilevel features are then combined by a fusion layer to produce the final image representation, which is fed into a combined softmax and triplet loss to optimize the model. The proposed method not only exploits the abstract information in high-level features but also integrates the appearance information in low-level features. Extensive experiments on public datasets, including Market-1501, DukeMTMC-reID, and CUHK03, demonstrate the effectiveness of the proposed method for person re-ID.
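The pipeline described above (multilevel features, a fusion layer, and a combined softmax and triplet loss) can be sketched in NumPy. This is a minimal illustration only: the feature dimensions, the concatenate-and-normalize fusion, the triplet margin, and the identity count are all assumptions, since the abstract does not specify the paper's actual layers or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
DIMS = (64, 128, 256)   # illustrative channel counts for three CNN stages
NUM_IDS = 10            # hypothetical number of training identities
MARGIN = 0.3            # assumed triplet margin; the paper's value is not given

def fuse(feats):
    """Fusion-layer sketch: concatenate multilevel features, then L2-normalize."""
    v = np.concatenate(feats)
    return v / np.linalg.norm(v)

def softmax_ce(fused, label, W, b):
    """Softmax (cross-entropy) identity loss on the fused representation."""
    logits = W @ fused + b
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])

def triplet_loss(anchor, pos, neg, margin=MARGIN):
    """Triplet loss with Euclidean distance on fused features."""
    d_ap = np.linalg.norm(anchor - pos)
    d_an = np.linalg.norm(anchor - neg)
    return max(0.0, d_ap - d_an + margin)

# Stand-in multilevel features for an (anchor, positive, negative) triplet;
# in the real model these would come from one forward pass per image.
a = fuse([rng.standard_normal(d) for d in DIMS])
p = fuse([rng.standard_normal(d) for d in DIMS])
n = fuse([rng.standard_normal(d) for d in DIMS])

# Hypothetical classifier weights for the softmax branch.
W = rng.standard_normal((NUM_IDS, sum(DIMS))) * 0.01
b = np.zeros(NUM_IDS)

# Combined objective: identity loss plus metric-learning loss.
total = softmax_ce(a, label=0, W=W, b=b) + triplet_loss(a, p, n)
```

During training, both loss terms would be backpropagated through the shared CNN, so the fused representation is shaped jointly by classification and metric-learning signals.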
