Abstract

Person re-identification (re-id) usually refers to matching people across disjoint camera views. Many existing methods focus on extracting discriminative features or learning distance metrics that make intra-class distances smaller than inter-class distances. These methods implicitly assume that pedestrian images are well aligned. However, a major challenge in person re-id is the unconstrained spatial misalignment between image pairs caused by view-angle changes and pedestrian pose variations. To address this problem, we propose the Recurrent Matching Network of Spatial Alignment Learning (RMN-SAL), which simulates human visual perception. Reinforcement learning is introduced to locate attention regions, since it provides a flexible learning strategy for sequential decision-making. A linear mapping converts the environment state into a spatial constraint, incorporating spatial alignment into feature learning, and recurrent models extract information from the resulting sequence of corresponding regions. Finally, person re-id is performed on the global features together with the features from the learned alignment regions. Our contributions are: 1) a recurrent matching network that combines local feature learning and sequential spatial correspondence learning in an end-to-end framework; and 2) a location network, based on reinforcement learning, that learns task-specific sequential spatial correspondences for different image pairs through local pairwise interactions of internal representations. The proposed model is evaluated on three benchmarks, Market-1501, DukeMTMC-reID, and CUHK03, and outperforms competing methods.
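
To make the described pipeline concrete, the following is a minimal PyTorch sketch of one plausible reading of the recurrent matching step. Everything here is an illustrative assumption rather than the authors' implementation: the module name `RecurrentMatcher`, the GRU core, the Gaussian exploration noise, the feature dimensions, and the `grid_sample`-based glimpse are all invented for exposition, and the REINFORCE training of the location network and the fusion with global features are omitted.

```python
# Illustrative sketch only; module names, dimensions, and the glimpse
# mechanism are assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentMatcher(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=256, num_steps=4):
        super().__init__()
        self.num_steps = num_steps
        # Recurrent core accumulating information from attended regions.
        self.rnn = nn.GRUCell(2 * feat_dim, hidden_dim)
        # Location network: maps the hidden state to a 2-D region centre
        # in [-1, 1]; the paper trains it with reinforcement learning,
        # here we only sample around its output for exploration.
        self.locator = nn.Linear(hidden_dim, 2)
        # Linear mapping turning the state into a spatial constraint
        # (an offset) that ties the two views' glimpse locations together.
        self.constraint = nn.Linear(hidden_dim, 2)
        self.score = nn.Linear(hidden_dim, 1)

    def glimpse(self, fmap, loc):
        # Bilinearly sample one local feature vector at `loc` from a
        # (B, C, H, W) feature map using grid_sample.
        grid = loc.view(-1, 1, 1, 2)               # (B, 1, 1, 2)
        patch = F.grid_sample(fmap, grid, align_corners=False)
        return patch.flatten(1)                    # (B, C)

    def forward(self, fmap_a, fmap_b):
        b = fmap_a.size(0)
        h = fmap_a.new_zeros(b, self.rnn.hidden_size)
        for _ in range(self.num_steps):
            mu = torch.tanh(self.locator(h))          # region in view A
            loc_a = mu + 0.05 * torch.randn_like(mu)  # exploration noise
            # Corresponding region in view B via the learned constraint.
            loc_b = torch.clamp(loc_a + self.constraint(h), -1.0, 1.0)
            f = torch.cat([self.glimpse(fmap_a, loc_a),
                           self.glimpse(fmap_b, loc_b)], dim=1)
            h = self.rnn(f, h)
        # Matching score from the accumulated pairwise representation.
        return self.score(h).squeeze(1)

# Usage: feature maps from any CNN backbone for two pedestrian crops.
matcher = RecurrentMatcher()
fa, fb = torch.randn(8, 128, 24, 8), torch.randn(8, 128, 24, 8)
print(matcher(fa, fb).shape)  # torch.Size([8])
```

The design point this sketch illustrates is the pairwise interaction: the glimpse location in one view and the alignment offset applied to the other view are both functions of a single recurrent state, so region selection and spatial correspondence are learned jointly across the sequence of steps.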
