Abstract
Most existing approaches to person re-identification are supervised, incurring a prohibitively high labeling cost and scaling poorly. Besides establishing effective similarity distance metrics, these supervised methods usually focus on constructing discriminative and robust features, which is extremely difficult under significant viewpoint variations. To overcome these challenges, we propose a novel unsupervised method, termed Robust Dictionary Learning with Graph Regularization (RDLGR), which guarantees view invariance by learning a dictionary shared across all camera views. To avoid severe performance degradation caused by outliers, we employ a capped ℓ2,1-norm loss, which makes our model more robust than the traditional quadratic loss, known to be easily dominated by outliers. Since our unsupervised setting lacks labeled cross-view discriminative information, we further introduce a cross-view graph Laplacian regularization term into the dictionary learning framework. As a result, the geometric structure of the original data space is preserved in the learned latent subspace as discriminative information, further boosting matching accuracy. Extensive experiments on four widely used benchmark datasets demonstrate the superiority of the proposed model over state-of-the-art methods.
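The two ingredients named above, a capped ℓ2,1-norm reconstruction loss and a graph Laplacian regularizer, can be illustrated with a minimal numpy sketch. This is not the paper's exact formulation or optimization procedure; the symbols (data X, dictionary D, codes A, affinity W, cap eps) and the function names are illustrative assumptions.

```python
import numpy as np

def capped_l21_loss(X, D, A, eps):
    """Capped l2,1-norm reconstruction loss: sum_i min(||x_i - D a_i||_2, eps).
    Capping the per-sample residual at eps bounds the influence of outlier
    samples, unlike a quadratic loss that outliers can dominate."""
    residuals = np.linalg.norm(X - D @ A, axis=0)  # l2 residual per column
    return np.minimum(residuals, eps).sum()

def laplacian_reg(A, W):
    """Graph Laplacian regularizer tr(A L A^T) with L = deg(W) - W (W symmetric).
    Equals 0.5 * sum_ij W_ij ||a_i - a_j||^2, so codes of samples that are
    close in the original space (high affinity W_ij) are kept close."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(A @ L @ A.T)
```

A full RDLGR-style objective would combine these two terms (plus standard sparsity and dictionary constraints) and alternate updates of D and A; the sketch only shows how each term is evaluated.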