Abstract

Person re-identification (re-ID) seeks to match the same individual across different cameras and remains a challenging visual task due to substantial variations in person appearance in complex scenarios. Unlike most conventional person re-ID methods, which generally reduce the task to either a multi-view learning problem or a multi-domain learning problem alone, this paper treats it as a multi-view multi-domain (MVMD) learning problem in order to exploit both benefits, by refreshing canonical correlation analysis (CCA) with two improvements; the resulting method is termed ranking-embedded transfer CCA (RTCCA). Specifically, to bridge the semantic gap between different views, we first embed a ranking weight matrix into CCA to strengthen the correlations among multi-view images of the same identity and simultaneously weaken those of different identities. Furthermore, we use the well-known maximum mean discrepancy (MMD) distribution metric as a regularization term to reduce the domain shift between the training and testing sets. More importantly, the two improvements benefit from each other, and their joint merit further boosts re-ID performance. Experiments on three benchmarks verify the efficacy of the proposed RTCCA compared with recent representative baseline person re-ID methods.
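
As an illustrative sketch only (the abstract does not state the objective function), one plausible way to combine a ranking weight matrix with CCA and an MMD penalty is shown below; the symbols X, Y, R, X_a, Y_a, M, and the trade-off parameter lambda are assumptions introduced for illustration, not the paper's actual formulation.

\[
\max_{w_x,\, w_y}\;\; w_x^{\top} X R\, Y^{\top} w_y
\;-\;\lambda\!\left( w_x^{\top} X_a M X_a^{\top} w_x \;+\; w_y^{\top} Y_a M Y_a^{\top} w_y \right)
\quad \text{s.t.}\;\; w_x^{\top} X X^{\top} w_x = 1,\;\; w_y^{\top} Y Y^{\top} w_y = 1,
\]

where X and Y collect training features from the two camera views (one column per sample), R is a ranking weight matrix whose entries are large for same-identity cross-view pairs and small (or negative) for different-identity pairs, and X_a, Y_a stack the training and unlabeled testing samples of each view. M is the usual MMD coefficient matrix (entries 1/n_s^2 for source-source pairs, 1/n_t^2 for target-target pairs, and -1/(n_s n_t) for mixed pairs), so that the quadratic penalty equals the squared linear-kernel MMD between projected training and testing data:

\[
\mathrm{MMD}^2(w) \;=\; \Bigl\| \tfrac{1}{n_s}\textstyle\sum_{i=1}^{n_s} w^{\top} x_i^{s} \;-\; \tfrac{1}{n_t}\textstyle\sum_{j=1}^{n_t} w^{\top} x_j^{t} \Bigr\|_2^2 \;=\; w^{\top} X_a M X_a^{\top} w .
\]

Under this hypothetical formulation, the ranking weights sharpen cross-view correlations per identity, while the MMD term pulls the projected training and testing distributions together, which is consistent with the two improvements described in the abstract.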
