Abstract

Despite considerable advances in deep learning for object recognition, several factors still hinder the performance of deep learning models. One of these is domain shift, which arises when the training and testing data follow different distributions. This paper addresses the issue of compact feature clustering in domain generalization, with the aim of optimizing the embedding space learned from multi-domain data. Specifically, we propose a domain-aware triplet loss for domain generalization, which not only clusters semantically similar features but also disperses domain-specific features. Unlike previous methods that focus on aligning distributions, our algorithm disperses domain information in the embedding space. Our approach rests on the assumption that embedding features can be clustered by domain information, which we support both mathematically and empirically in this paper. Furthermore, in investigating feature clustering for domain generalization, we observe that the factors influencing the convergence of the metric-learning loss are more significant than the pre-defined domains. To address this, we apply two methods that normalize the embedding space and reduce the internal covariate shift of the embedding features. An ablation study illustrates the effectiveness of our algorithm. Additionally, experiments on benchmark datasets, including PACS, VLCS, and Office-Home, show that our method outperforms related approaches that focus on domain discrepancy. Notably, our results with a RegNetY-16GF backbone substantially surpass state-of-the-art methods on these benchmarks. Our code is available at https://github.com/workerbcd/DCT.
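To make the idea concrete, below is a minimal sketch of what a domain-aware triplet loss could look like in PyTorch. This is our illustrative reading of the abstract, not the authors' implementation (which is in the linked repository); the function name, the margins `margin_cls` and `margin_dom`, and the hard-mining strategy are all assumptions.

```python
# Illustrative sketch of a domain-aware triplet loss: a standard semantic
# triplet term plus a term that pushes apart same-domain, different-class
# features. NOT the authors' code; hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def domain_aware_triplet_loss(emb, labels, domains,
                              margin_cls=0.3, margin_dom=0.3):
    """emb: (B, D) embeddings; labels: (B,) class ids; domains: (B,) domain ids."""
    emb = F.normalize(emb, dim=1)                 # unit-norm embeddings
    dist = torch.cdist(emb, emb)                  # (B, B) pairwise distances

    same_cls = labels[:, None] == labels[None, :]
    same_dom = domains[:, None] == domains[None, :]
    eye = torch.eye(len(emb), dtype=torch.bool, device=emb.device)

    # Semantic term: pull in the hardest (farthest) positive, push out the
    # hardest (closest) negative, as in batch-hard triplet mining.
    pos = (dist + (~same_cls | eye) * -1e9).max(dim=1).values
    neg = (dist + same_cls * 1e9).min(dim=1).values
    loss_cls = F.relu(pos - neg + margin_cls).mean()

    # Domain term: disperse features that share a domain but not a class,
    # so domain information is spread out rather than aligned.
    dom_pair = same_dom & ~same_cls
    if dom_pair.any():
        loss_dom = F.relu(margin_dom - dist[dom_pair]).mean()
    else:
        loss_dom = emb.sum() * 0                  # zero, but keeps the graph

    return loss_cls + loss_dom
```

In use, this loss would be added to the usual classification objective on mini-batches sampled across the source domains; the relative weighting of the two terms is another free design choice not specified by the abstract.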
