Abstract

Recent years have witnessed rapid advances in training deep networks on synthetic data, whose annotations can be generated automatically. However, a domain discrepancy still exists between synthetic data and real data. In this paper, we address the domain discrepancy issue from three aspects: 1) We design a synthetic image generator that produces automatically labeled images based on 3D scenes. 2) We propose a novel adversarial domain adaptation model that learns robust intermediate representations free of distractors to improve transfer performance. 3) We construct a distractor-invariant network and adopt a sample transferability strategy at the global and local levels, respectively, to mitigate the cross-domain gap. Exploratory experiments demonstrate that the proposed model outperforms other state-of-the-art models by large margins, improving mAP by 10%–15% across various domain adaptation scenarios.
