Abstract

Domain generalization (DG) aims to learn a model that generalizes well to an unseen test distribution. Mainstream methods follow the domain-invariant representation learning philosophy to achieve this goal. However, due to the lack of prior knowledge to determine which features are domain-specific and task-irrelevant, and which are domain-invariant and task-relevant, existing methods typically learn entangled representations, limiting their capacity to generalize to the distribution-shifted target domain. To address this issue, in this paper we propose novel Disentangled Domain-Invariant Feature Learning Networks (D2IFLN) to realize feature disentanglement and facilitate domain-invariant feature learning. Specifically, we introduce a semantic disentanglement network and a domain disentanglement network, disentangling the learned domain-invariant features from both domain-specific class-irrelevant features and domain-discriminative features. To avoid semantic confusion in adversarial learning for domain-invariant feature learning, we further introduce a graph neural network to aggregate semantic features from different domains during model training. Extensive experiments on three DG benchmarks show that the proposed D2IFLN performs better than the state-of-the-art.
