Abstract
Cross-domain object detection transfers knowledge from a source domain to enhance object detection in a target domain. It reduces the annotation workload in the new domain and significantly improves the adaptability of the network. We consider a more realistic transfer scenario in which samples from the target domains are not all available at once but arrive sequentially, so the learner must acquire knowledge of each new domain without forgetting previously learned domains, which is consistent with the basic assumption of incremental learning. Existing methods, however, do not account for this sequential learning process over multiple target domains, which can cause performance degradation. To address this issue, we propose a novel multi-domain adaptation method for object detection based on incremental learning. Specifically, the incremental learning network preserves the knowledge of multiple domains and enables the model to fuse knowledge from different domains effectively during training. To further improve performance, we propose a progressive training strategy that lets the model adapt to multiple domains gradually. Moreover, we use a multi-level feature alignment module to ensure that domain alignment is achieved on features at multiple levels. Experiments on two sets of six datasets demonstrate that our model effectively alleviates domain knowledge forgetting in multi-target domain adaptation and significantly improves detection accuracy in each domain.