Abstract

Person re-identification (Re-ID) is a challenging but significant research topic. Previous methods based on global features pay little attention to fine-grained feature information, while methods based on local features rely heavily on accurate pedestrian detection bounding boxes to align image pairs. In this paper, we propose the Multi-Scale Feature Fusion Network, which extracts global and local features simultaneously and learns them jointly. The network consists of two branches. One is a global branch for global feature learning: it extracts whole-body feature information to represent the pedestrian image as a whole. The other is a local branch for local feature learning: guided by the structure of the human body, it segments the image into six horizontal strips to obtain a local feature representation of the pedestrian image. In addition, we use a novel integration of multiple loss functions to further improve the recognition accuracy of the network. Experiments on the Market-1501 and DukeMTMC-reID datasets show that our proposed method achieves state-of-the-art results.
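To make the two-branch design concrete, the sketch below outlines one plausible PyTorch implementation: a shared backbone feeds a global branch (one pooled whole-body vector) and a local branch that pools six horizontal strips of the feature map separately. The ResNet-50 backbone, the per-strip classifiers, and the identity count `num_ids` are illustrative assumptions; the abstract does not specify the paper's backbone or its exact loss combination.

```python
# Minimal sketch of a two-branch global/local Re-ID network.
# Assumptions (not stated in the abstract): ResNet-50 backbone,
# one ID classifier per branch output, 2048-d feature maps.
import torch
import torch.nn as nn
from torchvision import models

class MultiScaleFeatureFusionNet(nn.Module):
    def __init__(self, num_ids, num_strips=6):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Shared backbone: all layers up to the final feature map.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        feat_dim = 2048
        # Global branch: one pooled vector for the whole body.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.global_classifier = nn.Linear(feat_dim, num_ids)
        # Local branch: pool each horizontal strip separately.
        self.num_strips = num_strips
        self.local_pool = nn.AdaptiveAvgPool2d((num_strips, 1))
        self.local_classifiers = nn.ModuleList(
            nn.Linear(feat_dim, num_ids) for _ in range(num_strips)
        )

    def forward(self, x):
        fmap = self.backbone(x)                     # (B, 2048, H, W)
        g = self.global_pool(fmap).flatten(1)       # (B, 2048)
        strips = self.local_pool(fmap).squeeze(-1)  # (B, 2048, num_strips)
        global_logits = self.global_classifier(g)
        local_logits = [
            clf(strips[:, :, i])
            for i, clf in enumerate(self.local_classifiers)
        ]
        return global_logits, local_logits
```

For training, one common choice for the multi-loss integration (again, not confirmed by the abstract) is to sum a cross-entropy ID loss over the global logits and each strip's logits, optionally adding a triplet loss on the pooled global feature.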
