Abstract

Domain adaptation for object detection has been extensively studied in recent years. Most existing approaches focus on single-source unsupervised domain adaptive object detection. A more practical scenario, however, is that the labeled source data are collected from multiple domains with different feature distributions. Conventional approaches do not work well in this setting because multiple domain gaps exist. We propose a Multi-source domain Knowledge Transfer (MKT) method to handle this situation. First, low-level features from all domains are aligned by learning a shallow feature extraction network. Then, high-level features from each pair of source and target domains are aligned by a subsequent multi-branch network. After that, we perform information fusion in two parts: (1) we train a detection network shared by all branches, weighting each source sample feature by its transferability, where the transferability of a source sample feature measures how indistinguishable it is from target-domain sample features; (2) at inference time, the target sample features output by the multi-branch network are fused according to the average transferability of each domain. Moreover, we leverage both image-level and instance-level attention to promote positive cross-domain transfer and suppress negative transfer. Our main contributions are the two-stage feature alignment and the information fusion. Extensive experiments on various transfer scenarios show that our method achieves state-of-the-art performance.
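The abstract does not give the exact fusion formula. Below is a minimal sketch of the inference-time fusion step it describes, assuming the per-domain average transferability scores are normalized into weights (here via a softmax) and used to average the target features produced by each source-specific branch. All names, shapes, and the softmax choice (`fuse_target_features`, `branch_feats`, `avg_transferability`) are hypothetical illustrations, not the authors' implementation.

```python
import torch

def fuse_target_features(branch_feats, avg_transferability):
    """Fuse per-branch target features by per-domain average transferability.

    branch_feats:        list of K tensors, each (N, C) -- target features
                         from each source-domain-specific branch.
    avg_transferability: tensor of shape (K,) -- average transferability of
                         each source domain, i.e. how indistinguishable its
                         samples are from target samples (assumed given).
    """
    # Normalize transferability scores into fusion weights (one assumption;
    # any normalization summing to 1 would play the same role).
    weights = torch.softmax(avg_transferability, dim=0)   # (K,)
    stacked = torch.stack(branch_feats, dim=0)            # (K, N, C)
    # Weighted average across branches: domains closer to the target
    # (higher transferability) contribute more to the fused feature.
    return (weights.view(-1, 1, 1) * stacked).sum(dim=0)  # (N, C)

# Example: 3 source domains, 4 target samples, 256-dim features.
feats = [torch.randn(4, 256) for _ in range(3)]
transfer = torch.tensor([0.8, 0.3, 0.5])
fused = fuse_target_features(feats, transfer)
print(fused.shape)  # torch.Size([4, 256])
```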
