Abstract

Unsupervised domain adaptation (UDA) is a technique for relieving domain shift by transferring relevant domain knowledge from a fully labeled source domain to an unlabeled target domain. While tremendous advances have been witnessed recently, the adoption of deep CNN-based UDA methods in real-world scenarios is still constrained by low-resource hardware. Whereas most prior strategies either handle domain shift via UDA or compress CNNs using knowledge distillation (KD), we seek to deploy models on resource-constrained devices that learn domain-adaptive knowledge without sacrificing accuracy. Because directly alleviating the significant discrepancy across domains can lead to unstable training and suboptimal performance, we propose a three-step Progressive Cross-domain Knowledge Distillation (PCdKD) paradigm for efficient unsupervised adaptive object detection. First, we apply pixel-level alignment via image-to-image translation to reduce the appearance discrepancy between domains. Then, a focal multi-domain discriminator is used to train the teacher–student peer networks, gradually distilling domain-adaptive knowledge in a cooperative manner. Finally, reliable pseudo labels produced by the adapted teacher detector are used to retrain the teacher–student models. Our method boosts the transferability of the teacher model while enabling the student model to meet the demands of real-time applications. Comprehensive experiments on four cross-domain datasets show that PCdKD outperforms most existing state-of-the-art approaches.
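For readers unfamiliar with the distillation component, the sketch below shows the standard soft-target KD loss (temperature-scaled KL divergence between teacher and student predictions) that teacher–student schemes such as this one typically build on. This is an illustrative, generic formulation, not the paper's exact PCdKD objective; all function names here are our own.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation loss: KL(teacher || student), scaled by T^2
    as in the standard Hinton-style formulation (not PCdKD-specific)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1)
    return float(kl.mean() * T ** 2)
```

The loss is zero when the student exactly matches the teacher and grows as their predictive distributions diverge, which is what drives the student toward the teacher's (domain-adapted) behavior.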
