Abstract

Pattern recognition in real-world scenarios is significantly challenged by the variability of visual statistics. Consequently, most existing algorithms, which rely on the assumption that training and test data are independent and identically distributed, generalize poorly on unseen test datasets. Although numerous approaches, including domain discriminators and domain-invariant feature learning, have been proposed to alleviate this problem, their purely data-driven nature and lack of interpretable principles leave researchers and developers uncertain. This dilemma prompts us to rethink the essence of network generalization. The observation that visual patterns lose discriminability after style transfer leads us to carefully consider the respective roles of style features and content features. Is style information related to domain bias? How can content and style features be effectively disentangled across domains? In this article, we first investigate the effect of feature normalization on domain adaptation. Based on this analysis, we propose a novel normalization module, called disentangling batch instance normalization (D-BIN), that adaptively leverages the information propagated through each channel and batch of features. In this module, we explicitly pursue the disentanglement of domain-specific and domain-invariant features. We employ contrastive learning to encourage images with the same semantics from different domains to have similar content representations but dissimilar style representations. Furthermore, we construct both self-form and dual-form regularizers that preserve the mutual information (MI) between the feature representations of the normalization layer, compensating for the loss of discriminative information and effectively matching the distributions across domains. D-BIN and the constraint terms can simply be plugged into state-of-the-art (SOTA) networks to improve their performance. Finally, experiments on domain adaptation and domain generalization, conducted on different datasets, demonstrate their effectiveness.
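The abstract does not specify the internals of D-BIN, but its two core ingredients, a normalization layer that adaptively blends batch-level and instance-level statistics, and a contrastive term that aligns content while separating style across domains, can be sketched roughly as follows. This is a minimal illustrative sketch: the per-channel gating scheme, the cosine-similarity loss form, the margin, and all names here are assumptions, not the authors' exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BatchInstanceNorm2d(nn.Module):
    # Hypothetical D-BIN-style layer: blend batch normalization (shared,
    # domain-level statistics) with instance normalization (per-image
    # statistics, which strip style) via a learnable per-channel gate.
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, eps=eps, affine=False)
        self.inorm = nn.InstanceNorm2d(num_channels, eps=eps, affine=False)
        # Gate in [0, 1] choosing between BN and IN per channel (assumed form).
        self.gate = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.5))
        # Shared affine transform applied after blending.
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x):
        rho = self.gate.clamp(0.0, 1.0)  # keep the gate in [0, 1]
        out = rho * self.bn(x) + (1.0 - rho) * self.inorm(x)
        return out * self.weight + self.bias

def content_style_contrastive_loss(content_a, content_b,
                                   style_a, style_b, margin=0.5):
    # For paired images with the same semantics from two domains:
    # pull content features together, push style features apart.
    sim_content = F.cosine_similarity(content_a, content_b, dim=1)
    sim_style = F.cosine_similarity(style_a, style_b, dim=1)
    loss_content = (1.0 - sim_content).mean()       # encourage similar content
    loss_style = F.relu(sim_style - margin).mean()  # penalize similar style
    return loss_content + loss_style

A layer like this can replace standard batch normalization layers in an existing backbone, which is consistent with the abstract's claim that D-BIN is a drop-in component for SOTA networks.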
