Abstract

Unsupervised domain adaptive object detection (UDA-OD) is a challenging problem because it must locate and recognize objects while maintaining generalization across domains. Most existing UDA-OD methods integrate adaptive modules directly into the detector; although this integration enhances generalization, it can significantly degrade detection performance. To address this problem, we propose an effective framework, named foregroundness-aware task disentanglement and self-paced curriculum adaptation (FA-TDCA), which disentangles the UDA-OD task into four independent subtasks: source detector pretraining, classification adaptation, location adaptation, and target detector training. This disentanglement transfers knowledge effectively while preserving the detection performance of our model. In addition, we propose a new metric, foregroundness, to evaluate the confidence of a localization result, and we use both foregroundness and classification confidence to assess the label quality of proposals. For effective knowledge transfer across domains, we adopt a self-paced curriculum learning paradigm to train the adaptors and gradually improve the quality of the pseudolabels assigned to target samples. Experimental results show that our method achieves state-of-the-art performance on four cross-domain object detection tasks.
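To make the pseudo-label selection idea concrete, the following is a minimal sketch of how foregroundness and classification confidence might be fused into a quality score and used in a self-paced curriculum. The abstract does not specify the fusion rule or the threshold schedule, so the geometric-mean combination, the linear threshold decay, and all names (`Proposal`, `label_quality`, `select_pseudo_labels`) are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only; the fusion rule and schedule are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Proposal:
    box: tuple          # (x1, y1, x2, y2) predicted location
    cls_conf: float     # classification confidence in [0, 1]
    fg_score: float     # foregroundness: confidence of the location, in [0, 1]

def label_quality(p: Proposal) -> float:
    """Fuse location and classification confidence into one score.
    A geometric mean is one plausible choice; the paper's exact
    combination may differ."""
    return math.sqrt(p.cls_conf * p.fg_score)

def select_pseudo_labels(proposals, round_idx, num_rounds,
                         t_start=0.9, t_end=0.5):
    """Self-paced curriculum: begin with a strict quality threshold so
    only easy, reliable target proposals become pseudolabels, then relax
    the threshold each round to gradually admit harder samples."""
    t = t_start + (t_end - t_start) * round_idx / max(num_rounds - 1, 1)
    return [p for p in proposals if label_quality(p) >= t]
```

Under this reading, each curriculum round retrains the adaptors on the currently selected pseudolabels, so label quality and model quality improve together across rounds.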
