Obtaining ground-truth label information from real-world data, along with uncertainty quantification, can be difficult or even infeasible. In the absence of labeled data for a given task, unsupervised domain adaptation (UDA) techniques have achieved great success by learning transferable knowledge from labeled source-domain data and applying it to unlabeled target-domain data, yet few studies address uncertainties under domain shift to improve model robustness. Distributionally robust learning (DRL) is emerging as a promising technique for building reliable learning systems that are robust to distribution shifts. In this study, we propose a distributionally robust unsupervised domain adaptation (DRUDA) method to enhance the generalization ability of machine learning models under input-space perturbations. The DRL-based UDA learning scheme is formulated as a min–max optimization problem that optimizes worst-case perturbations of the training source data. Our Wasserstein distributionally robust framework reduces shifts in the joint distributions across domains. The proposed DRUDA has been evaluated on digit datasets and the Office-31 dataset and compared with other state-of-the-art domain adaptation techniques. Our experimental results show that DRUDA improves domain adaptation accuracy on target domains.
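The min–max scheme described above can be sketched as follows. This is a minimal illustration of distributionally robust training on a toy logistic-regression task: an inner loop ascends the loss minus a Wasserstein-style quadratic transport penalty to find a worst-case input perturbation, and an outer loop descends the model loss on the perturbed inputs. The data, hyperparameters, and function names are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled source data (hypothetical; the paper uses digit and Office-31 datasets)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # logistic-regression weights
b = 0.0          # bias


def loss_grad_x(w, b, x, yi):
    """Gradient of the logistic loss with respect to the input x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - yi) * w


def worst_case_perturb(w, b, x, yi, gamma=1.0, steps=5, step_size=0.1):
    """Inner maximization: ascend loss(x + delta) - gamma * ||delta||^2 over delta."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = loss_grad_x(w, b, x + delta, yi) - 2.0 * gamma * delta
        delta += step_size * g
    return x + delta


# Outer minimization: SGD on the worst-case (perturbed) source inputs
lr = 0.5
for epoch in range(30):
    for i in range(len(X)):
        x_adv = worst_case_perturb(w, b, X[i], y[i])
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        g = p - y[i]
        w -= lr * g * x_adv
        b -= lr * g

# Accuracy on the clean source data after robust training
pred = (X @ w + b > 0).astype(float)
acc = (pred == y).mean()
```

The quadratic penalty weight `gamma` plays the role of the Lagrangian dual variable of the Wasserstein-ball constraint: larger values keep the perturbations small, while smaller values allow more aggressive worst-case shifts of the training distribution.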