Semi-supervised learning (SSL) suffers severe performance degradation when labeled and unlabeled data come from inconsistent and imbalanced distributions. Nonetheless, there is little theoretical guidance on how to remedy this issue. To bridge the gap between theoretical insights and practical solutions, we embark on an analysis of the generalization bound of classic SSL algorithms. This analysis reveals that distribution inconsistency between unlabeled and labeled data can substantially enlarge the generalization error bound. Motivated by this theoretical insight, we present a Triplet Adaptation Framework (TAF) that reduces the distribution divergence and improves the generalization of SSL models. TAF comprises three adapters: the Balanced Residual Adapter, which maps the class distributions of labeled and unlabeled data to a uniform distribution, reducing class-distribution divergence; the Representation Adapter, which maps the representation distribution of unlabeled data to that of labeled data, reducing representation-distribution divergence; and the Pseudo-Label Adapter, which aligns predicted pseudo-labels with the class distribution of unlabeled data, preventing erroneous pseudo-labels from exacerbating representation divergence. The three adapters collaborate synergistically to reduce the generalization bound, ultimately yielding a more robust and generalizable SSL model. Extensive experiments across various robust SSL scenarios validate the efficacy of our method.