Abstract

Vehicle Re-Identification (Re-ID) is a challenging vision task, mainly because the appearance of a vehicle varies dramatically across viewpoints. Moreover, different vehicles of the same model and color commonly look alike and are thus hard to distinguish. To alleviate the negative effects of viewpoint variation, we design a multi-view branch network in which each branch learns a viewpoint-specific feature without parameter sharing. Because each branch focuses on a limited range of viewpoints, these viewpoint-specific features perform substantially better than a general feature learned by a uniform network. To further differentiate visually similar vehicles, we strengthen the discriminative power on their subtle local differences by introducing a spatial attention model into each feature learning branch. The multi-view feature learning and spatial attention learning together compose our neural network architecture, which is trained end to end with softmax and triplet losses. We evaluate our method on two large vehicle Re-ID datasets, VehicleID and VeRi-776. Extensive experiments show that our method achieves promising performance. For example, we achieve mAP of 76.78% and 72.53% on the VehicleID and VeRi-776 datasets, respectively, substantially better than the current state of the art.
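The components described above can be sketched in a few lines. This is a minimal, illustrative NumPy sketch, not the authors' implementation: the attention scoring (channel-mean activations), the per-viewpoint projection matrices `branch_weights`, and the triplet margin value are all assumptions made for illustration. It shows the key ideas only: each viewpoint branch has its own parameters (no sharing), spatial attention re-weights feature-map locations before pooling, and a triplet loss separates embeddings of different vehicles.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a flat array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention_pool(feat_map):
    """feat_map: (C, H, W). Score each spatial location, normalize the
    scores into an attention map, and pool the channel features weighted
    by that map (simple attention scoring, chosen for illustration)."""
    C, H, W = feat_map.shape
    scores = feat_map.mean(axis=0).reshape(-1)      # (H*W,) location scores
    attn = softmax(scores).reshape(H, W)            # attention map, sums to 1
    pooled = (feat_map * attn).sum(axis=(1, 2))     # (C,) attended feature
    return pooled, attn

def viewpoint_embed(feat_map, viewpoint, branch_weights):
    """One projection matrix per viewpoint branch -- no parameter sharing.
    branch_weights: dict mapping viewpoint name -> (D, C) matrix (hypothetical)."""
    pooled, _ = spatial_attention_pool(feat_map)
    emb = branch_weights[viewpoint] @ pooled
    return emb / (np.linalg.norm(emb) + 1e-12)      # L2-normalized embedding

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge on the gap between anchor-positive and anchor-negative distances."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, H, W, D = 8, 4, 4, 16
    # separate weights per viewpoint branch (randomly initialized here)
    branches = {v: rng.standard_normal((D, C)) for v in ("front", "side", "rear")}
    a = viewpoint_embed(rng.random((C, H, W)), "front", branches)
    p = viewpoint_embed(rng.random((C, H, W)), "front", branches)
    n = viewpoint_embed(rng.random((C, H, W)), "rear", branches)
    print(triplet_loss(a, p, n))
```

In a full pipeline, a shared convolutional backbone would produce `feat_map`, the attention scores and branch projections would be learned jointly, and the triplet loss would be combined with a softmax identity-classification loss, as the abstract describes.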
