Abstract
Metric learning has been widely studied in person re-identification (re-id). However, most existing metric learning methods learn only one holistic Mahalanobis distance metric on the concatenated high-dimensional feature vector. This single-metric strategy cannot handle complex nonlinear data structure and is prone to overfitting. Moreover, feature concatenation cannot exploit the discriminative capability of individual feature types, and low-dimensional features tend to be dominated by high-dimensional ones. Motivated by these problems, we propose a multiple metric learning method for the re-id problem, where an individual sub-metric is learned separately for each feature type and the final metric is formed as a weighted sum of the sub-metrics. The sub-metrics are learned with the Cross-view Quadratic Discriminant Analysis (XQDA) algorithm, and the weight of each sub-metric is assigned in a two-step procedure. First, the importance of each feature type is estimated according to its discriminative power, measured in a query-adaptive manner via partial Area Under Curve (pAUC) scores. Then, the weights of all feature types are learned simultaneously with a maximum-margin, multi-task structural SVM learning framework, to ensure that relevant gallery images are ranked before irrelevant ones within all feature spaces. Finally, the sub-metrics are integrated with the learned weights in an ensemble model, yielding a sophisticated distance metric. Experiments on the challenging i-LIDS, VIPeR, CAVIAR and 3DPeS datasets demonstrate the effectiveness of the proposed method.
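The ensemble distance described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: in the paper, each sub-metric M_k would be produced by XQDA and each weight w_k by the pAUC-based, structural-SVM weighting procedure; here both are taken as given inputs, and all function names are illustrative.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance between feature vectors x and y
    under a positive semi-definite metric matrix M."""
    d = x - y
    return float(d @ M @ d)

def ensemble_distance(x_feats, y_feats, metrics, weights):
    """Weighted sum of per-feature-type sub-metric distances.

    x_feats, y_feats : lists of feature vectors, one per feature type
    metrics          : list of sub-metric matrices (e.g. from XQDA)
    weights          : list of non-negative weights (e.g. from the
                       pAUC / structural-SVM weighting step)
    """
    return sum(w * mahalanobis_sq(x, y, M)
               for x, y, M, w in zip(x_feats, y_feats, metrics, weights))
```

At query time, the gallery would be ranked by this ensemble distance to the probe image, so that feature types with higher learned weights contribute more to the final ranking.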