Abstract

Adversarial learning has become an effective paradigm for learning transferable features in domain adaptation. However, many previous adversarial domain adaptation methods inevitably damage the discriminative information contained in transferable features, which limits the potential of adversarial learning. In this paper, we explore the reason for this phenomenon and find that the model pays more attention to the alignment of feature norms than the learning of domain-invariant features during adversarial adaptation. Moreover, we observe that the feature norms contain some crucial category information, which is ignored in previous studies. To achieve better adversarial adaptation, we propose two novel feature norms alignment strategies: Histogram-guided Norms Alignment (HNA) and Transport-guided Norms Alignment (TNA). Both strategies model the feature norms from the distribution perspective, which not only facilitates the reduction of the norms discrepancy but also makes full use of discriminative information contained in the norms. Extensive experiments demonstrate that progressively aligning the feature norms distributions of two domains can effectively promote the capture of semantically rich shared features and significantly boost the model’s transfer performance. We hope our findings can shed some light on future research of adversarial domain adaptation.
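The abstract does not give the HNA/TNA formulas, but the core idea of modeling feature norms "from the distribution perspective" can be sketched. Below is a minimal, illustrative NumPy example of transport-guided alignment: for one-dimensional distributions such as feature norms, optimal transport has a closed form obtained by matching sorted samples. All function names and shapes here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of transport-guided norms alignment (TNA-style).
# The paper's actual losses are not specified in the abstract; this only
# demonstrates the 1D Wasserstein distance between norm distributions.

def feature_norms(features):
    """L2 norm of each feature vector in a batch (shape: [batch, dim])."""
    return np.linalg.norm(features, axis=1)

def norms_transport_loss(src_feats, tgt_feats):
    """1D Wasserstein-1 distance between two feature-norm distributions.

    For 1D distributions with equal sample counts, optimal transport
    reduces to pairing sorted samples, so no solver is needed.
    """
    src_norms = np.sort(feature_norms(src_feats))
    tgt_norms = np.sort(feature_norms(tgt_feats))
    return np.mean(np.abs(src_norms - tgt_norms))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 128))  # source-domain features
tgt = rng.normal(0.0, 2.0, size=(64, 128))  # target features, larger norms

loss = norms_transport_loss(src, tgt)  # positive: norm distributions differ
```

Minimizing such a loss during adversarial training would progressively pull the two norm distributions together while preserving their ordering, which is one plausible way to "reduce the norms discrepancy" without discarding the category information the norms carry.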
