Abstract

Person reidentification aims at matching images of the same person across disjoint camera views, a challenging problem for the multimedia analysis, multimedia editing, and content-based media retrieval communities. The major challenge lies in preserving the similarity of the same person across video footage with large appearance variations while discriminating between different individuals. To address this problem, conventional methods usually consider the pairwise similarity between persons by measuring only the point-to-point distance. In this paper, we propose using a deep learning technique to model a novel set-to-set (S2S) distance, in which the underlying objective focuses on preserving the compactness of intraclass samples for each camera view while maximizing the margin between the intraclass set and the interclass set. The S2S distance metric consists of three terms, namely, the class-identity term, the relative distance term, and the regularization term. The class-identity term keeps the intraclass samples within each camera view close together, the relative distance term maximizes the distance between the intraclass set and the interclass set across different camera views, and the regularization term smooths the parameters of the deep convolutional neural network. As a result, the learned deep model can effectively identify the match for a probe object among the candidates in the video gallery by learning discriminative and stable feature representations. We conducted extensive comparative evaluations on the CUHK01, CUHK03, PRID2011, and Market1501 benchmark datasets to demonstrate the advantages of our method over state-of-the-art approaches.
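To make the structure of the objective concrete, below is a minimal PyTorch sketch of a three-term set-to-set loss of the kind the abstract describes. This is an illustration under our own assumptions, not the paper's actual formulation: the set statistics used (per-view means, extreme pairwise distances), the `margin` and `lam_rel` hyperparameters, and all function names are hypothetical, and the smoothness regularization term is assumed to be realized as standard weight decay in the optimizer.

```python
import torch

def s2s_loss(feats, labels, cams, margin=1.0, lam_rel=1.0):
    """Hedged sketch of a three-term set-to-set (S2S) objective.

    feats:  (N, D) embeddings produced by the CNN for one mini-batch
    labels: (N,)   person identities (the batch is assumed to contain
                   more than one identity, so the interclass set is non-empty)
    cams:   (N,)   camera indices
    """
    id_term = feats.new_zeros(())
    rel_term = feats.new_zeros(())
    for pid in labels.unique():
        pos = feats[labels == pid]   # intraclass set
        neg = feats[labels != pid]   # interclass set
        # Class-identity term: pull the intraclass samples of each
        # camera view toward their per-view mean (intra-view compactness).
        for cam in cams[labels == pid].unique():
            view = feats[(labels == pid) & (cams == cam)]
            id_term = id_term + ((view - view.mean(0)) ** 2).sum(1).mean()
        # Relative distance term: hinge margin between the farthest
        # intraclass pair and the nearest intraclass-to-interclass pair,
        # i.e. a set-to-set rather than point-to-point comparison.
        d_pos = torch.cdist(pos, pos).max()
        d_neg = torch.cdist(pos, neg).min()
        rel_term = rel_term + torch.relu(margin + d_pos - d_neg)
    # The regularization term on the network parameters is assumed to be
    # applied as weight_decay in the optimizer rather than computed here.
    return id_term + lam_rel * rel_term
```

In this reading, minimizing `id_term` enforces the per-camera-view compactness, while the hinge in `rel_term` pushes the entire interclass set at least `margin` farther from the probe identity's set than its own worst-case spread.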
