Abstract

Vehicle viewpoint annotations are inherently ambiguous because they depend on subjective human judgment, which prevents cross-domain vehicle re-identification methods from learning viewpoint-invariant features during source-domain pre-training and, in turn, leads to cross-view misalignment in downstream target-domain tasks. To address these challenges, this paper presents a dual-level viewpoint-learning framework consisting of an angle-invariance pre-training method and a meta-orientation adaptation learning strategy. A dual-level viewpoint-annotation proposal is first designed to concretely redefine the vehicle viewpoint from two aspects (i.e., the angle level and the orientation level). An angle-invariance pre-training method is then proposed to preserve identity similarities and differences across viewpoints; it consists of a part-level pyramidal network and an angle bias metric loss. Under the supervision of the angle bias metric loss, the part-level pyramidal network, serving as the backbone, learns the subtle differences among vehicles observed from different angle-level viewpoints. Finally, a meta-orientation adaptation learning strategy is designed to extend the generalization ability of the re-identification model to unseen orientation-level viewpoints. Specifically, the proposed meta-learning strategy performs meta-orientation training and meta-orientation testing according to the orientation-level viewpoints in the target domain. Extensive experiments on public vehicle re-identification datasets demonstrate that the proposed method effectively exploits the redefined dual-level viewpoint information and significantly outperforms state-of-the-art methods in alleviating viewpoint variations.
