Abstract
Person re-identification is an extremely challenging problem, as a person's appearance often undergoes dramatic changes due to large variations in viewpoint, illumination, pose, image resolution, and background clutter. Extracting discriminative features is one of the most effective ways to address these challenges. In this paper, we focus on learning high-level features and combine low-level, mid-level, and high-level features to re-identify a person across different cameras. First, we design a Siamese inception architecture network to automatically learn effective semantic features for person re-identification across camera views. Furthermore, we combine the multi-level features in a null space using the null Foley–Sammon transform (NFST) metric learning approach. In this null space, images of the same person are projected to a single point, which minimizes the intra-class scatter to the extreme while simultaneously maximizing the relative inter-class separation. Finally, comprehensive evaluations demonstrate that our approach achieves better performance than existing methods on four person re-identification benchmark datasets: Market-1501, CUHK03, PRID2011, and VIPeR.
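The null-space property described above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a simplified NFST-style projection, assuming features are given as a NumPy array: it computes the within-class scatter matrix and projects onto its null space, so that samples of the same class collapse to a single point while distinct classes generally remain separated.

```python
import numpy as np

def nfst_projection(X, y, tol=1e-8):
    """Project onto the null space of the within-class scatter matrix.

    X: (n_samples, n_features) feature matrix (here n_features > n_samples,
       as is typical for re-id descriptors, so the null space is non-empty).
    y: (n_samples,) integer class labels.
    Returns W, a (n_features, k) basis of the null space of S_w.
    """
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        D = Xc - Xc.mean(axis=0)       # within-class deviations
        Sw += D.T @ D                   # accumulate within-class scatter
    # Eigenvectors with (near-)zero eigenvalues span the null space of S_w.
    vals, vecs = np.linalg.eigh(Sw)
    return vecs[:, vals < tol]

# Toy usage: two identities, two high-dimensional samples each.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10))
y = np.array([0, 0, 1, 1])
W = nfst_projection(X, y)
Z = X @ W
# Same-identity samples now coincide: Z[0] == Z[1] and Z[2] == Z[3].
```

Because any null-space direction w satisfies w.T @ Sw @ w = 0, every within-class deviation has zero component along w, which is exactly why images of the same person map to a single point in this space.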