Abstract

Pedestrian re-identification (ReID) aims to identify a target of interest in pedestrian images captured by multiple cameras. Recent ReID algorithms have shown that local features can describe individual body parts, global features can represent overall appearance, and relational features can link local features together to form more discriminative representations. Although these algorithms improve pedestrian re-identification to some extent, their recognition accuracy remains unsatisfactory. To address this, we propose a novel multi-feature extraction fusion model (MFEFM). It extracts three different features from pedestrian images simultaneously and merges them into a single, more discriminative feature. First, ResNet-50 serves as the backbone for extracting basic features. Then, global max pooling (GMP) is used to extract local features of pedestrian images, global average pooling (GAP) is used to extract global features, and a pose estimator is used to extract key-point features in parallel. Finally, a relation network links the local features with the key-point features, and the three features are concatenated.
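The pooling-and-fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the backbone output is a (C, H, W) feature map held in a NumPy array, and the `keypoint_feat` vector standing in for the pose-estimator output is hypothetical.

```python
import numpy as np

def global_max_pool(fmap):
    # fmap: (C, H, W) -> (C,) local-feature descriptor via global max pooling (GMP)
    return fmap.max(axis=(1, 2))

def global_avg_pool(fmap):
    # fmap: (C, H, W) -> (C,) global-feature descriptor via global average pooling (GAP)
    return fmap.mean(axis=(1, 2))

def fuse_features(fmap, keypoint_feat):
    # Concatenate local (GMP), global (GAP), and key-point features
    # into one discriminative vector (the relation-network step is omitted).
    local_feat = global_max_pool(fmap)
    global_feat = global_avg_pool(fmap)
    return np.concatenate([local_feat, global_feat, keypoint_feat])

# Toy example: a 2-channel 3x3 map standing in for the ResNet-50 output,
# and a zero vector standing in for the pose-estimator key-point features.
fmap = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
kp = np.zeros(4)
fused = fuse_features(fmap, kp)  # length C + C + len(kp) = 8
```

The fused vector simply stacks the three descriptors, so its dimensionality is the sum of the individual feature sizes.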
