Abstract

Unsupervised domain adaptation, which avoids the expensive annotation of target data, has achieved remarkable success in semantic segmentation. However, most existing state-of-the-art methods do not examine whether semantic representations are transferable across domains, which may result in negative transfer caused by irrelevant knowledge. To tackle this challenge, in this paper we develop a novel Knowledge Aggregation-induced Transferability Perception (KATP) module for unsupervised domain adaptation, a pioneering attempt to distinguish transferable from untransferable knowledge across domains. Specifically, the KATP module quantifies which semantic knowledge is transferable across domains by propagating transferability information from global category-wise prototypes. Based on KATP, we design a novel KATP Adaptation Network (KATPAN) to determine where and how to transfer. KATPAN contains a transferable appearance translation module T_A() and a transferable representation augmentation module T_R(), which together form a virtuous circle of performance promotion. T_A() develops a transferability-aware information bottleneck to highlight where to adapt transferable visual characterizations and modality information; T_R() explores how to augment transferable representations while discarding untransferable information, and in return promotes the translation performance of T_A(). Experiments on several representative benchmarks and a medical dataset demonstrate the state-of-the-art performance of our model.
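The abstract mentions quantifying per-class transferability from global category-wise prototypes. As a minimal sketch of that general idea (not the authors' exact KATP formulation): one can compute a mean feature vector per semantic class in each domain, then score each class by the similarity of its source and target prototypes. The function names and the cosine-similarity scoring below are illustrative assumptions.

```python
import numpy as np

def category_prototypes(features, labels, num_classes):
    """Global category-wise prototype: mean feature vector per semantic class.

    features: (N, D) array of pixel/region embeddings
    labels:   (N,) array of class indices
    """
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def transferability_scores(src_protos, tgt_protos, eps=1e-8):
    """Illustrative per-class transferability: cosine similarity between
    source and target prototypes, mapped from [-1, 1] into [0, 1].
    (A stand-in for KATP's propagation-based score, which the abstract
    does not fully specify.)"""
    num = (src_protos * tgt_protos).sum(axis=1)
    den = (np.linalg.norm(src_protos, axis=1)
           * np.linalg.norm(tgt_protos, axis=1) + eps)
    return (num / den + 1.0) / 2.0
```

Classes whose score is high would be treated as carrying transferable knowledge, while low-scoring classes would be down-weighted to avoid negative transfer.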
