Abstract
In this paper, we investigate the problem of video-based person re-identification (re-id), which matches people’s video clips across non-overlapping camera views recorded at different times. A key challenge of video-based person re-id is that a person’s appearance and motion often look different and contribute unequally across disjoint camera views, owing to changes in lighting, viewpoint, background, etc.; we call this the “view-bias” problem. However, many previous video-based person re-id approaches have not quantified the importance of different types of features at different camera views, so the two key feature types (i.e., appearance and motion features) do not collaborate effectively and the “view-bias” problem remains unsolved. To address this problem, we propose a Deep Asymmetric Metric learning (DAM) method that embeds a proposed asymmetric distance metric learning loss into a two-stream deep neural network, jointly learning view-specific and feature-specific transformations to overcome the “view-bias” problem in video-based person re-id. Since learning these view-specific transformations becomes expensive when there is a large number of camera views, a clustering-based DAM method is developed to make DAM scalable. Extensive evaluations have been carried out on three public datasets: PRID2011, iLIDS-VID and MARS. Our results verify that learning view-specific and feature-specific transformations is beneficial, and that the presented DAM performs more effectively overall for video-based person re-id on these challenging benchmarks.
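To make the core idea concrete, the asymmetric metric described above can be sketched as follows: instead of one shared projection, each camera view gets its own transformation matrix, and distance is measured after projecting both samples into a common space. This is a minimal illustrative sketch, not the paper's actual loss or network; the matrix names (`U_a`, `U_b`), feature dimensions, and random data are all assumptions for demonstration.

```python
import numpy as np

def asymmetric_distance(x_a, x_b, U_a, U_b):
    # Project each sample with its own view-specific matrix,
    # then compare in the shared embedding space. With U_a == U_b
    # this reduces to an ordinary (symmetric) Mahalanobis-style metric.
    return np.linalg.norm(U_a @ x_a - U_b @ x_b)

# Hypothetical setup: 4-D video-level features projected into a 3-D shared space.
rng = np.random.default_rng(0)
U_a = rng.standard_normal((3, 4))  # transformation learned for camera view a
U_b = rng.standard_normal((3, 4))  # transformation learned for camera view b
x_a = rng.standard_normal(4)       # feature extracted from a clip in view a
x_b = rng.standard_normal(4)       # feature extracted from a clip in view b

d = asymmetric_distance(x_a, x_b, U_a, U_b)
print(d >= 0.0)
```

In the full method such projections would be learned per view (and per feature stream, i.e., appearance and motion) inside the two-stream network, rather than fixed as here.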