Abstract

Vehicle re-identification (V-ReID) aims to retrieve images of a specific vehicle from a set of images typically captured by different cameras. Vehicles are among the most important targets in cross-camera recognition systems, yet recognizing them is one of the most difficult tasks because vehicles are rigid objects whose visible characteristics differ only subtly. Compared with the many methods that improve re-identification accuracy architecturally, data augmentation is a more straightforward and effective technique. In this paper, we propose a novel data synthesis method for V-ReID based on local-region perspective transformation, adversarial learning of transformation states, and a candidate pool. Specifically, we first propose a parameter generator network, a lightweight convolutional neural network, to generate the transformation states. Secondly, we design an adversarial module that adds as much noise information as possible while keeping the labels and structure of the dataset intact; with this module, we can improve the performance of the network and generate more suitable and harder training samples. Furthermore, we use a candidate pool to store harder samples for further selection, which improves the performance of the model. Our system pays more balanced attention to vehicle features. Extensive experiments show that our method significantly boosts V-ReID performance on the VeRi-776, VehicleID and VERI-Wild datasets.
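To make the core augmentation concrete, the following is a minimal sketch of a local-region perspective transformation: the four corners of a region are randomly jittered, a homography is fitted to the corner correspondences, and the region is re-sampled through the inverse map. This is an illustration under stated assumptions, not the paper's implementation; in the proposed method the transformation states (here, random `jitter`) would instead come from the parameter generator network, and the function names (`homography`, `warp_local_region`) are hypothetical.

```python
import numpy as np

def homography(src, dst):
    """Fit the 3x3 homography H mapping src -> dst from 4 point
    correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null-space vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_local_region(img, box, jitter=0.15, rng=None):
    """Apply a random perspective transform to one local region of an
    H x W x C image. `box` = (x0, y0, x1, y1); each corner is displaced
    by up to `jitter` times the region size (the transformation state)."""
    rng = np.random.default_rng(rng)
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    src = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]], float)
    dst = src + rng.uniform(-jitter, jitter, (4, 2)) * [w, h]
    dst = np.clip(dst, [0, 0], [img.shape[1] - 1, img.shape[0] - 1])
    # Inverse mapping: each output pixel samples its source location.
    Hinv = np.linalg.inv(homography(src, dst))
    out = img.copy()
    ys, xs = np.mgrid[y0:y1, x0:x1]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    mapped = Hinv @ pts
    mx = np.clip(np.round(mapped[0] / mapped[2]).astype(int), 0, img.shape[1] - 1)
    my = np.clip(np.round(mapped[1] / mapped[2]).astype(int), 0, img.shape[0] - 1)
    out[ys.ravel(), xs.ravel()] = img[my, mx]  # nearest-neighbour sampling
    return out
```

Because only the chosen region is re-sampled and the rest of the image is untouched, the vehicle identity label remains valid while local geometry is perturbed, which is what lets such samples be made progressively "harder" by an adversarial choice of transformation state.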
