Abstract

The discrepancy between training and testing data distributions, together with the inductive bias of convolutional neural networks toward image style, reduces a model's generalization ability. Many unsupervised domain generalization methods based on feature decoupling neglect the explicit decoupling of content and style features at the outset, so the learned content features still contain considerable redundant information, which limits further gains in generalization. To tackle this problem, this paper reformulates the learning of domain-invariant (content) features as an information compression problem, minimizing the redundancy in content features. Furthermore, to strengthen decoupled learning, this paper introduces cross-domain loss functions and image reconstruction modules that explicitly decouple and merge content and style across different domains. Extensive experiments demonstrate significant improvements over recent state-of-the-art approaches.
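The abstract does not spell out the exact losses, so the following is a minimal PyTorch sketch of the cross-domain decouple-and-merge idea it describes: images from two domains are split into content and style features, recombined across domains, and re-encoded so that style information is pushed out of the content representation. The toy modules and the loss (`ContentEncoder`, `StyleEncoder`, `Decoder`, `cross_domain_loss`) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of cross-domain content/style decoupling and merging.
# Architectures and loss weights are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentEncoder(nn.Module):
    """Toy content encoder: keeps a spatial feature map (assumed design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1))

    def forward(self, x):
        return self.net(x)


class StyleEncoder(nn.Module):
    """Toy style encoder: pools to a global per-channel style vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1))

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Toy decoder that merges a content map with a style vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, c, s):
        # Broadcast the (B, 16, 1, 1) style vector over the content map.
        return self.net(c + s)


def cross_domain_loss(x_a, x_b, c_enc, s_enc, dec):
    """Decouple, swap, and reconstruct across two source domains."""
    # Explicitly decouple each batch into content and style features.
    c_a, s_a = c_enc(x_a), s_enc(x_a)
    c_b, s_b = c_enc(x_b), s_enc(x_b)

    # Within-domain reconstruction: content + own style recovers the image.
    l_recon = (F.l1_loss(dec(c_a, s_a), x_a) +
               F.l1_loss(dec(c_b, s_b), x_b))

    # Cross-domain merge: render A's content with B's style and vice versa.
    # Re-encoding the merged image should return the same content features,
    # which discourages redundant style information in the content branch.
    x_ab, x_ba = dec(c_a, s_b), dec(c_b, s_a)
    l_cross = (F.l1_loss(c_enc(x_ab), c_a.detach()) +
               F.l1_loss(c_enc(x_ba), c_b.detach()))

    return l_recon + l_cross


if __name__ == "__main__":
    x_a = torch.randn(4, 3, 32, 32)  # batch from source domain A
    x_b = torch.randn(4, 3, 32, 32)  # batch from source domain B
    loss = cross_domain_loss(x_a, x_b, ContentEncoder(), StyleEncoder(), Decoder())
    print(loss.item())
```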
