ABSTRACT
Unsupervised domain adaptation (UDA) techniques have the potential to enhance the transferability of neural network models to unknown scenarios and to reduce the cost of labelling new datasets. Popular solutions to this challenging UDA task are adversarial training and self-training. However, current adversarial-based UDA methods emphasize only global or local feature alignment, which is insufficient to tackle the domain shift. In addition, self-training-based methods inevitably produce many wrong pseudo labels on the target domain because of their bias towards the source domain. To tackle these problems, this paper proposes a hybrid training framework that integrates global-local adversarial training and self-training strategies to effectively address global-local domain shift. First, the adversarial approach measures the discrepancies between domains from both domain-level and category-level perspectives: the adversarial network incorporates discriminators at the local-category and global-domain levels, thereby facilitating global-local feature alignment through multi-level adversarial training. Second, a self-training strategy is integrated to acquire domain-specific knowledge, effectively mitigating negative transfer. By combining these two domain adaptation strategies, we obtain a more efficient approach for narrowing the domain gap. Finally, a self-labelling mechanism is introduced to directly explore the inherent distribution of pixels, allowing the pseudo labels generated during the self-training stage to be rectified. Compared to state-of-the-art UDA methods, the proposed method gains mIoU improvements of 3.2%, 1.21%, 5.86%, and 6.16% on Rural→Urban, Urban→Rural, Potsdam→Vaihingen, and Vaihingen→Potsdam, respectively.
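To make the hybrid training idea concrete, the sketch below illustrates, in PyTorch, how a global-domain discriminator and a local-category discriminator can be combined with confidence-thresholded self-training. It is a minimal illustration only: the module names (SegNet, Discriminator), the loss weights, the confidence threshold, and the toy network sizes are assumptions for exposition, not the authors' implementation, and the discriminator-update and pseudo-label rectification steps are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 6  # hypothetical label count (e.g. a LoveDA/ISPRS-style set)

class SegNet(nn.Module):
    """Toy encoder + classifier standing in for the segmentation backbone."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv2d(64, num_classes, 1)
    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.classifier(feat)

class Discriminator(nn.Module):
    """Patch discriminator, reused at the global-domain and local-category levels."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 64, 3, padding=1),
                                 nn.LeakyReLU(0.2),
                                 nn.Conv2d(64, 1, 1))
    def forward(self, x):
        return self.net(x)

seg = SegNet()
d_global = Discriminator(64)           # operates on domain-level features
d_local = Discriminator(NUM_CLASSES)   # operates on category probability maps
bce = nn.BCEWithLogitsLoss()

def adversarial_step(src_img, src_lbl, tgt_img, lambda_adv=0.001):
    """Generator-side loss: supervised source loss + global-local adversarial terms."""
    src_feat, src_logits = seg(src_img)
    tgt_feat, tgt_logits = seg(tgt_img)
    seg_loss = F.cross_entropy(src_logits, src_lbl, ignore_index=255)
    # Fool both discriminators so target samples resemble source samples.
    pred_g = d_global(tgt_feat)
    pred_l = d_local(F.softmax(tgt_logits, dim=1))
    adv_loss = bce(pred_g, torch.ones_like(pred_g)) + bce(pred_l, torch.ones_like(pred_l))
    return seg_loss + lambda_adv * adv_loss

def self_training_step(tgt_img, conf_thresh=0.9):
    """Self-training on the target domain; low-confidence pixels are ignored."""
    with torch.no_grad():
        _, logits = seg(tgt_img)
        prob, pseudo = F.softmax(logits, dim=1).max(dim=1)
        pseudo[prob < conf_thresh] = 255   # 255 = ignore index
    _, logits = seg(tgt_img)
    return F.cross_entropy(logits, pseudo, ignore_index=255)

In practice the two steps would alternate within each training iteration, and the paper's self-labelling mechanism would further rectify the pseudo labels before the self-training loss is applied; that rectification is not shown here.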