Domain adaptation is an important subfield of transfer learning and has been successfully applied in many machine learning applications. Recently, significant theoretical and algorithmic advances have been achieved in domain adaptation. Existing theoretical analyses of domain adaptation are mainly based on the VC dimension and Rademacher complexity. There are also some covering-number-based results, but most of these bounds build on Rademacher complexity results and are obtained indirectly through the relationship between the covering number and Rademacher complexity. In this paper, we propose a theoretical analysis framework for domain adaptation in which the error bound is derived directly from the covering number, an effective tool for analyzing generalization error in statistical learning theory. We derive a generalization error bound for domain adaptation with a class of loss functions satisfying certain assumptions. We also propose a mixup contrastive adversarial network for domain adaptation, which introduces a mixup module to enhance the alignment of the source and target domains during domain transfer and a contrastive learning module to achieve class-level alignment after domain transfer. Experimental results demonstrate the effectiveness of the proposed algorithm and the properties of the theoretical results.
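To illustrate the cross-domain mixup idea mentioned above, the following is a minimal sketch (not the paper's actual implementation) of blending source and target batches with a Beta-sampled coefficient, as commonly done in mixup-style domain alignment. It assumes PyTorch; the function name `mixup_domains` and the parameter `alpha` are hypothetical placeholders, not identifiers from the paper.

```python
import torch

def mixup_domains(x_src, x_tgt, alpha=0.2):
    """Blend a source batch and a target batch with a Beta(alpha, alpha) coefficient.

    Returns the mixed batch and the mixing coefficient lambda, which could be
    used to weight the corresponding domain-alignment losses.
    (Hypothetical sketch; not the authors' implementation.)
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Assumes x_src and x_tgt have the same shape (e.g., same batch size).
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, lam

# Usage example: mix a source image batch with a target image batch.
x_src = torch.randn(32, 3, 224, 224)
x_tgt = torch.randn(32, 3, 224, 224)
x_mix, lam = mixup_domains(x_src, x_tgt)
```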