Abstract
Vehicle re-identification (Re-ID) aims to retrieve vehicles across non-overlapping cameras. Most studies learn representations solely from the appearance of vehicle images. Some works exploit spatio-temporal information to filter out implausible candidates and refine the results in the testing phase. However, they ignore the potential topological relations among cameras in Closed Circuit Television (CCTV) systems during the training phase, which usually leads to suboptimal results due to high intra-identity variations. To address this problem, we propose a novel vehicle re-identification framework that explicitly models the camera topological relations of all input images to aggregate neighbor images and thus acquire camera-independent representations. Specifically, we first construct a Camera Topology Graph (CTG) to elucidate the topological relations among cameras; it takes the cameras as nodes and constructs edges at four levels of the camera: system, position, orientation, and individual. Then, we introduce a Camera Topology-based Graph Convolutional Network (CT-GCN), which suppresses irrelevant neighbor images and learns a distinct representation function for each camera. Finally, we propose a topological cross-entropy loss to obtain more discriminative vehicle representations. The whole network is trained in an end-to-end manner. Extensive experiments on three benchmark datasets demonstrate the effectiveness of the proposed method against state-of-the-art vehicle Re-ID methods.
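The abstract does not give the layer equations of CT-GCN, so the following is only a rough, hypothetical sketch of how a graph-convolution step over a camera topology graph might aggregate neighbor features: cameras are nodes, CTG edges define the adjacency, and each node's feature is updated from its topological neighbors. All function names, shapes, and the specific (symmetrically normalized) aggregation rule are assumptions, not the paper's actual formulation.

```python
import numpy as np

def ct_gcn_layer(X, A, W):
    """One illustrative graph-convolution step over a camera graph.

    X: (n_cameras, d_in) node (camera) features
    A: (n_cameras, n_cameras) adjacency matrix from the CTG
       (1 = cameras are topologically related, 0 otherwise)
    W: (d_in, d_out) learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)     # aggregate neighbors, then ReLU

# Toy example: 4 cameras on a chain topology, 3-dim input features.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.standard_normal((3, 2))
H = ct_gcn_layer(X, A, W)
print(H.shape)  # (4, 2): one 2-dim embedding per camera node
```

In this sketch, each camera's output embedding mixes only features of topologically connected cameras, which is the general mechanism by which a GCN could suppress contributions from unrelated views.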