Feature disentanglement techniques have been widely employed to separate transferable (domain-invariant) features from non-transferable (domain-specific) features in Unsupervised Domain Adaptation (UDA). However, due to the complex interplay among high-dimensional features, the separated "non-transferable" features may still be partially informative. Suppressing or disregarding them, as is common in previous methods, can overlook this inherent transferability. In this work, we introduce two concepts, Partially Transferable Class Features (PTCF) and Partially Transferable Domain Features (PTDF), and propose a succinct feature disentanglement technique. Unlike prior works, we do not seek to peel off the non-transferable features thoroughly, as this is challenging in practice. Instead, we adopt a two-stage strategy consisting of rough feature disentanglement and dynamic adjustment. We name our model ELT because it systematically Explores Latent Transferability of feature components. ELT automatically evaluates the transferability of internal feature components, dynamically giving more attention to features with high transferability and less to those with low transferability, effectively mitigating negative transfer. Extensive experimental results have demonstrated its effectiveness. The code and supplementary file will be available at https://github.com/njtjmc/ELT.
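The core idea of the dynamic adjustment stage — reweighting feature components by estimated transferability rather than discarding the "non-transferable" ones — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the function name `transferability_gate`, the use of a per-component sigmoid gate, and the toy logits are all assumptions made for illustration; in practice the scores would be learned end-to-end.

```python
import numpy as np

def transferability_gate(features, scores):
    """Reweight feature components by estimated transferability.

    features: (batch, d) array of disentangled feature components.
    scores:   (d,) raw transferability logits (hypothetical; learned in practice).
    Returns features scaled per component: high-transferability components
    are emphasized, low-transferability ones are attenuated (not discarded).
    """
    weights = 1.0 / (1.0 + np.exp(-scores))  # sigmoid -> attention in (0, 1)
    return features * weights                # broadcast over the batch axis

# Toy example: 2 samples, 4 feature components with mixed transferability.
feats = np.ones((2, 4))
logits = np.array([4.0, 0.0, -4.0, 2.0])  # pretend transferability estimates
out = transferability_gate(feats, logits)
```

Because the gate is soft, even a component with a low score retains a small contribution, which matches the abstract's point that "non-transferable" features may still be partially informative.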