Deep neural networks suffer severe performance degradation when facing a distribution shift between the labeled source domain and the unlabeled target domain. Domain adaptation addresses this issue by aligning the feature distributions of both domains. Conventional methods assume that the labeled source samples are drawn from a single data distribution (domain) and can be fully accessed during training. However, in real applications, multiple source domains with different distributions often exist, and source samples may be unavailable due to privacy and storage constraints. To address these multi-source and data-free challenges, Multi-Source-Free Domain Adaptation (MSFDA) uses only diverse pre-trained source models without requiring any source data. Most existing MSFDA methods adapt each source model to the target domain individually, making them ineffective in leveraging the complementary transferable knowledge across source models. In this paper, we propose a novel COnsistency-guided multi-source-free Domain Adaptation (CODA) method, which leverages the label consistency criterion as a bridge to facilitate cooperation among source models. CODA applies consistency regularization on the soft labels of weakly- and strongly-augmented target samples from each pair of source models, allowing them to supervise each other. To achieve high-quality pseudo-labels, CODA also performs consistency-based denoising to unify the pseudo-labels from different source models. Finally, CODA optimally combines the source models by maximizing the mutual information of the predictions of the resulting target model. Extensive experiments on four benchmark datasets demonstrate the effectiveness of CODA compared with state-of-the-art methods.
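To make the two training signals described above concrete, the following is a minimal PyTorch-style sketch of (i) pairwise consistency regularization between source models on weakly and strongly augmented target views, and (ii) an information-maximization term on the combined target model's predictions. All names (`pairwise_consistency_loss`, `mutual_information_loss`, the list of source models, the augmented tensors) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two consistency-based objectives from the abstract.
# Assumes `models` is a list of torch.nn.Module classifiers returning logits,
# and x_weak / x_strong are batches of weakly / strongly augmented target images.
import torch
import torch.nn.functional as F


def pairwise_consistency_loss(models, x_weak, x_strong):
    """Cross-model consistency: the soft label of model i on the weak view
    supervises model j's prediction on the strong view, for every pair i != j."""
    loss, n_pairs = 0.0, 0
    for i, teacher in enumerate(models):
        with torch.no_grad():
            soft_target = F.softmax(teacher(x_weak), dim=1)  # soft pseudo-label
        for j, student in enumerate(models):
            if i == j:
                continue
            log_pred = F.log_softmax(student(x_strong), dim=1)
            loss = loss + F.kl_div(log_pred, soft_target, reduction="batchmean")
            n_pairs += 1
    return loss / max(n_pairs, 1)


def mutual_information_loss(logits, eps=1e-8):
    """Information maximization on the target model's predictions:
    low per-sample entropy, high entropy of the marginal class distribution."""
    probs = F.softmax(logits, dim=1)
    cond_entropy = -(probs * torch.log(probs + eps)).sum(dim=1).mean()
    marginal = probs.mean(dim=0)
    marg_entropy = -(marginal * torch.log(marginal + eps)).sum()
    return cond_entropy - marg_entropy  # minimizing this maximizes MI
```

Minimizing `cond_entropy - marg_entropy` encourages confident yet diverse predictions, which is the usual way mutual information between target samples and predicted labels is maximized in source-free adaptation; the exact weighting and denoising steps in CODA are beyond this sketch.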