Abstract

Matching manually cropped pedestrian images between queries and candidates, termed person re-identification, has achieved significant progress with deep convolutional neural networks. Recently, a task called ‘person search’ has been proposed for the end-to-end application of re-identification technologies. It integrates object detection and person re-identification, aiming to both locate and match pedestrians in a gallery of raw images. However, such a hybrid network is difficult to design and computationally expensive to run in practical situations. To speed up the design and ease the implementation, this paper proposes a deep-frozen transfer learning framework, named FT-MDnet, that extracts re-identification features from a pre-trained detection network in two steps. First, a network called the adaptive transfer learning network (ATLnet) uses a channel-wise attention mechanism to convert the shared feature map of the underlying detection network into a re-identification feature map. Then, a multi-branch feature representation network called the multiple descriptor network (MDnet) extracts re-identification features from that feature map. The proposed solution has been verified on several mainstream detection networks, including YOLOv3, YOLOv4, Mask R-CNN, and CenterNet. The experimental results show that it outperforms other person search solutions by a large margin, demonstrating that the feature representations of detection networks are highly compatible with re-identification and that the proposed framework extracts them effectively. To encourage further research, we have made our framework open source.
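To make the two-step idea concrete, the following is a minimal PyTorch sketch, not the authors' released code: `ATLNet` here is a hypothetical squeeze-and-excitation-style channel-wise attention block that re-weights a frozen detection feature map, and `MDNet` is a hypothetical multi-branch head that pools the adapted map into several concatenated descriptors. All layer sizes and branch counts are illustrative assumptions.

```python
# Illustrative sketch only; module names and sizes are assumptions,
# not the paper's exact architecture.
import torch
import torch.nn as nn

class ATLNet(nn.Module):
    """Channel-wise attention that adapts a frozen detection feature map
    into a re-identification feature map (squeeze-and-excitation style)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map shared from the detection backbone
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze: global average pool
        return x * w.unsqueeze(-1).unsqueeze(-1)  # excite: channel re-weighting

class MDNet(nn.Module):
    """Multi-branch descriptor head: each branch pools the adapted map
    into an embedding; branches are concatenated into the final feature."""
    def __init__(self, channels: int, dim: int = 256, branches: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(channels, dim),
            )
            for _ in range(branches)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([b(x) for b in self.branches], dim=1)

# Usage: the detection backbone stays frozen; only ATLNet/MDNet train.
backbone_map = torch.randn(2, 512, 16, 8)   # e.g. a cropped RoI feature map
atl, md = ATLNet(512), MDNet(512)
reid_feature = md(atl(backbone_map))        # (2, 3 * 256) re-ID descriptor
```

In this reading, "deep-frozen" means the detection network's weights are never updated; only the lightweight attention and descriptor heads are trained, which is what keeps the hybrid design cheap.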
