Abstract
In certain scenarios, models trained on a specific dataset (the source domain) can generalize well to novel scenes (the target domain) via knowledge transfer. However, such source detectors may not align well with a low-resource target domain because of the imbalanced and inconsistent domain shift involved. In this paper, we propose a semi-supervised detector that adapts to domain shifts at both the appearance and semantic levels. To this end, two components are introduced: an appearance adaptation network that combines instance and batch normalization, and a semantic adaptation network in which an adversarial transfer procedure re-weights the discriminator loss to improve feature alignment between the two domains despite their imbalanced scales. Furthermore, a self-paced training procedure re-trains the detector by alternately generating pseudo-labels in the target domain, proceeding from easy samples to hard ones. In our experiments, we conduct an empirical analysis of the proposed framework on datasets including Cityscapes and VOC0712; the results verify its higher accuracy and effectiveness in comparison with state-of-the-art detectors.
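As a rough illustration of the re-weighted discriminator loss mentioned above, the sketch below uses inverse-frequency weights so that the smaller domain contributes equally to the adversarial objective. The exact weighting scheme, function name, and signature are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def reweighted_discriminator_loss(logits, domain_labels, eps=1e-8):
    """Binary cross-entropy over domain logits, re-weighted by inverse
    domain frequency (a hypothetical form of the paper's re-weighting).

    logits: raw discriminator scores, shape (N,)
    domain_labels: 0 = source, 1 = target, shape (N,)
    """
    logits = np.asarray(logits, dtype=float)
    y = np.asarray(domain_labels, dtype=float)
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid probability of "target"
    # Per-example weight: inverse frequency of that example's domain,
    # normalised so the weights sum to N when both domains are present.
    n_src = max(int((y == 0).sum()), 1)
    n_tgt = max(int((y == 1).sum()), 1)
    w = np.where(y == 1, len(y) / (2.0 * n_tgt), len(y) / (2.0 * n_src))
    bce = -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    return float(np.mean(w * bce))
```

With uninformative logits (all zeros) the loss equals log 2 regardless of how imbalanced the two domains are, which is the point of the re-weighting: the scarce target domain is not drowned out by the abundant source domain.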