Abstract

Learning discriminative features from training data while filtering out features that belong to occlusions is critical in person retrieval scenarios. Most current person re-identification (Re-ID) methods based on classification or deep metric representation learning tend to overlook occlusion in the training set. Representations learned from obstacles are easily over-fitted and misleading because the obstacles are mistaken for part of the human body. To alleviate the occlusion problem, we propose a pose-guided feature region-based fusion network (PFRFN), which uses pose landmarks to guide local feature learning and evaluates the representation learning risk with a separate loss for each body part. Compared with using only a global classification loss, jointly considering the local losses and the results of robust pose estimation enables the deep network to learn representations of the body parts that are prominently visible in the image and to remain discriminative in occluded scenes. Experimental results on multiple datasets, i.e., Market-1501, DukeMTMC, and CUHK03, demonstrate the effectiveness of our method in a variety of scenarios.
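To make the loss design concrete, the following is a minimal PyTorch-style sketch of the idea the abstract describes: a global identity-classification loss combined with a separate classification loss per body part, where pose-landmark confidences gate how much each part contributes. The class name `PartGatedLoss`, its arguments, and the visibility gating are hypothetical illustrations of the per-part loss formulation, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class PartGatedLoss(nn.Module):
    """Global ID loss plus per-part ID losses gated by pose visibility."""

    def __init__(self, part_weight: float = 1.0):
        super().__init__()
        # reduction="none" keeps one loss value per sample/part so that
        # each part loss can be evaluated and weighted separately.
        self.ce = nn.CrossEntropyLoss(reduction="none")
        self.part_weight = part_weight

    def forward(self, global_logits, part_logits, labels, visibility):
        # global_logits: (B, C) logits from the whole-body branch
        # part_logits:   (B, P, C) logits from P part-level branches
        # labels:        (B,) identity labels
        # visibility:    (B, P) in [0, 1], e.g. pose keypoint confidences;
        #                occluded parts receive low weight so their
        #                features are not forced to explain the identity.
        global_loss = self.ce(global_logits, labels).mean()

        B, P, C = part_logits.shape
        per_part = self.ce(
            part_logits.reshape(B * P, C),
            labels.repeat_interleave(P),
        ).reshape(B, P)
        # Average the per-part losses with visibility weights; the small
        # epsilon avoids division by zero when every part is occluded.
        part_loss = (per_part * visibility).sum() / (visibility.sum() + 1e-6)

        return global_loss + self.part_weight * part_loss
```

The design choice this sketch highlights is that an occluded part contributes little to the training objective, so the network is not pushed to treat obstacle pixels as identity evidence, which is the over-fitting failure mode discussed above.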
