Abstract

Vehicle re-identification (ReID) aims to find the same vehicle from images captured under different views in cross-camera scenarios. Traditional methods focus on depicting the holistic appearance of a vehicle, but they suffer from hard samples with the same vehicle type and color. Recent works leverage discriminative visual cues to solve this problem, where three challenges exist. First, vehicle features are misaligned and distorted because of viewpoint variance. Second, the discriminative visual cues are usually subtle and are easily diluted by the large area of non-discriminative regions in subsequent average pooling modules. Third, these discriminative visual cues are dynamic for the same image when it is compared with different vehicle images. To tackle the above problems, we project vehicle images from 2D to 3D space, rotate them to the same view, and leverage the viewpoint-aligned features to enhance the discriminative parts for vehicle ReID. In detail, our method consists of three sub-modules: 1) The 3D viewpoint alignment module restores the 3D information of the vehicle from a single image, then rotates and re-renders it under fixed viewpoints. It enables fine-grained viewpoint alignment and relieves the distortion of the vehicle caused by viewpoint variation. 2) The discriminative parts enhancement module performs feature enhancement guided by the prior distribution of distinctive parts. 3) The adaptive duplicated parts suppression module guides the network to focus on the most discriminative parts, which not only prevents the dilution of high responses but also provides explainable evidence. Experimental results show that our method achieves new state-of-the-art performance on large-scale vehicle ReID datasets.
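
Since the abstract only names the three sub-modules, the following PyTorch-style sketch is purely illustrative of how such a pipeline might be wired together. The `renderer`, `backbone`, `num_parts`, the part-attention layer, and the suppression weighting are all hypothetical placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewpointAlignedReID(nn.Module):
    """Illustrative three-module pipeline: 3D viewpoint alignment,
    discriminative parts enhancement, and duplicated parts suppression.
    All component choices below are assumptions for the sketch."""

    def __init__(self, backbone, renderer, num_parts=8, feat_dim=2048):
        super().__init__()
        self.renderer = renderer   # hypothetical: lifts the 2D image to 3D and re-renders it at fixed viewpoints
        self.backbone = backbone   # e.g. a ResNet-50 style feature extractor returning (B, feat_dim, h, w)
        # per-part attention maps standing in for a prior over distinctive regions
        self.part_attention = nn.Conv2d(feat_dim, num_parts, kernel_size=1)
        self.embed = nn.Linear(feat_dim, feat_dim)

    def forward(self, image):
        # 1) 3D viewpoint alignment: re-render the input under canonical viewpoints
        aligned = self.renderer(image)                        # (B, C, H, W)
        feat = self.backbone(aligned)                         # (B, feat_dim, h, w)

        # 2) Discriminative parts enhancement: pool features under part-wise attention
        attn = torch.sigmoid(self.part_attention(feat))       # (B, num_parts, h, w)
        part_feats = torch.einsum('bphw,bchw->bpc', attn, feat)
        part_feats = part_feats / (attn.sum(dim=(2, 3)).unsqueeze(-1) + 1e-6)

        # 3) Adaptive duplicated parts suppression: emphasise the strongest part
        #    responses instead of averaging over all parts equally
        scores = part_feats.norm(dim=-1)                      # (B, num_parts)
        weights = F.softmax(scores, dim=-1).unsqueeze(-1)     # (B, num_parts, 1)
        embedding = (weights * part_feats).sum(dim=1)         # (B, feat_dim)
        return self.embed(embedding)
```

In this sketch, the final embedding would be compared across camera views with a standard metric (e.g. cosine distance) for retrieval; the exact training losses and rendering procedure are not specified in the abstract.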
