Abstract

Driven by the demands of criminal investigation and the development of deep learning, person re-identification has become a research hotspot. Recent neural network-based person re-identification methods have achieved excellent results. However, most frameworks rely on complex structural designs or redundant auxiliary networks to guide model construction, which greatly increases training and deployment costs. In addition, the correlation between channel information and spatial information in pedestrian feature maps is rarely exploited. We therefore design a lightweight attention module to address this lack of correlation. The proposed module sequentially extracts channel and spatial features from person images and effectively associates the two kinds of information through its sequential connection. The module has a simple structure, and the parameter increase it adds to the backbone network is tiny. We place this fused module in each feature extraction layer so that the model attends to the pedestrian information extracted at every stage. To avoid a complex model structure, we adopt a residual network as the backbone and rely on the attention mechanism alone to extract person features, without pose estimation or additional auxiliary networks. We also adjust the dropout rate of the person classification layer to improve the model's generalization ability. We evaluate our method on three public datasets: Market-1501, DukeMTMC-reID, and CUHK03 (both detected and labeled). The results demonstrate the proposed method's effectiveness and show highly competitive performance on all three datasets.
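The abstract describes the module only at this level of detail, but the channel-then-spatial sequence it outlines follows a common attention pattern. Below is a minimal PyTorch sketch of that pattern; the class names, reduction ratio, and kernel size are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention (illustrative).
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each channel

class SpatialAttention(nn.Module):
    # Spatial attention computed from pooled channel statistics (illustrative).
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)   # average over channels
        mx, _ = x.max(dim=1, keepdim=True)  # max over channels
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w  # reweight each spatial position

class SequentialAttention(nn.Module):
    # Channel attention followed by spatial attention, connected in sequence.
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# Example: apply the module to a hypothetical ResNet stage output.
# attn = SequentialAttention(256)
# y = attn(torch.randn(2, 256, 64, 32))  # same shape in, same shape out

Because the module only rescales the feature map, it can be inserted after each backbone stage without changing downstream layer shapes, which is consistent with the small parameter overhead the abstract claims.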
