Abstract

Many vehicle re-identification (Re-ID) problems require the robust recognition of vehicle instances across multiple viewpoints. Existing approaches to vehicle Re-ID are insufficiently robust because they can neither distinguish among vehicles of the same type nor learn high-level representations in deep networks for identical vehicles seen from different views. To address these issues, this paper proposes a viewpoint adaptation network (VANet) with a cross-view distance metric for robust vehicle Re-ID. The method consists of two modules. The first is the VANet with cross-view label smoothing regularization (CVLSR), which abstracts a vehicle's visual patterns at different levels and then integrates the multi-level features. In particular, CVLSR, which operates on color domains, assigns a virtual label to the generated data to smooth image-to-image translation noise. This module thus supplies the viewing-angle information of the training data and provides strong robustness for vehicles across different viewpoints. The second module is the cross-view distance metric, which uses a cascaded cross-view matching approach to combine the original features with the generated ones and thereby obtain supplementary viewpoint information for the multi-view matching of vehicles. Results of extensive experiments on two large-scale vehicle Re-ID datasets, VeRi-776 and VehicleID, demonstrate that the proposed method is robust and outperforms other state-of-the-art Re-ID methods across multiple viewpoints.
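The abstract does not give the exact formulations of CVLSR or the cascaded metric, but the two ideas can be illustrated with a minimal sketch. Assuming a label-smoothing formulation in which a viewpoint-translated (generated) image keeps most of its probability mass on the source identity while the remainder is spread uniformly to absorb translation noise, and assuming the cascaded metric is a weighted combination of distances over original and generated features (the functions, the smoothing factor `eps`, and the mixing weight `alpha` are all hypothetical names for illustration):

```python
import numpy as np

def smoothed_virtual_label(true_class: int, num_classes: int,
                           eps: float = 0.1) -> np.ndarray:
    """Virtual label for a generated (viewpoint-translated) image:
    (1 - eps) on the source identity, eps spread uniformly over all
    classes to smooth image-to-image translation noise. Hypothetical
    formulation in the spirit of label smoothing regularization."""
    q = np.full(num_classes, eps / num_classes)
    q[true_class] += 1.0 - eps
    return q

def cascaded_cross_view_distance(f_orig_a: np.ndarray, f_gen_a: np.ndarray,
                                 f_orig_b: np.ndarray, f_gen_b: np.ndarray,
                                 alpha: float = 0.5) -> float:
    """Combine the distance between original features with the distance
    between generated (cross-view) features; alpha is a hypothetical
    mixing weight, not a value from the paper."""
    d_orig = np.linalg.norm(f_orig_a - f_orig_b)
    d_gen = np.linalg.norm(f_gen_a - f_gen_b)
    return float(alpha * d_orig + (1.0 - alpha) * d_gen)
```

In this sketch, training on generated images with `smoothed_virtual_label` rather than a hard one-hot label keeps translation artifacts from being memorized as identity cues, while `cascaded_cross_view_distance` lets the generated views contribute viewpoint information at matching time.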
