Abstract

Vehicle re-identification (ReID) under viewpoint variations is an interesting but challenging task in computer vision. Most existing vehicle ReID approaches learn features from the single view in which each vehicle was originally captured, even though matching vehicles across cameras requires features that cover varying views. As a result, these models' discriminative capability is limited in realistic scenarios, since visual information from arbitrary views is missing. In this paper, we propose a multi-view generative adversarial network (MV-GAN) that synthesizes realistic vehicle images conditioned on arbitrary skeleton views; MV-GAN is designed specifically for viewpoint normalization in vehicle ReID. From the generated images together with the original images, we infer a multi-view vehicle representation and learn distance metrics for vehicle ReID that are free of the influence of viewpoint variations. We show that the features of the generated images and of the original images are complementary. We demonstrate the validity of the proposed method through extensive experiments on the VeRi, VehicleID, and VRIC datasets, and comparisons with state-of-the-art algorithms show the superiority of multi-view image generation for improving vehicle ReID.
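
The following is a minimal, hypothetical sketch (not the paper's released code) of the fusion idea in the abstract: features extracted from the original image and from the generated views are treated as complementary and combined into a single multi-view descriptor that a distance metric can compare. The backbone choice (ResNet-50), the average-pooling fusion, and all names are illustrative assumptions.

```python
# Hypothetical sketch of multi-view feature fusion for vehicle ReID.
# Assumptions: a shared ResNet-50 feature extractor, mean fusion over views,
# and Euclidean distances between L2-normalized embeddings.
import torch
import torch.nn as nn
from torchvision import models


class MultiViewDescriptor(nn.Module):
    """Fuse per-view CNN features into one viewpoint-normalized representation."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        backbone = models.resnet50(weights=None)   # shared feature extractor
        backbone.fc = nn.Identity()                # keep the 2048-d pooled features
        self.backbone = backbone
        self.embed = nn.Linear(2048, feat_dim)     # project to the embedding space

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, V, 3, H, W) -- the original image plus V-1 generated views
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w))  # (B*V, 2048)
        feats = self.embed(feats).reshape(b, v, -1)           # (B, V, feat_dim)
        fused = feats.mean(dim=1)                              # average complementary views
        return nn.functional.normalize(fused, dim=1)


if __name__ == "__main__":
    model = MultiViewDescriptor()
    # Two query and two gallery vehicles, each with 1 original + 3 generated views.
    query = torch.randn(2, 4, 3, 224, 224)
    gallery = torch.randn(2, 4, 3, 224, 224)
    dist = torch.cdist(model(query), model(gallery))  # pairwise ReID distances
    print(dist.shape)  # torch.Size([2, 2])
```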
