Abstract

Intelligent systems driven by deep learning have become relevant in real-world applications with the increasing availability of technology and data. However, real-world settings require effective and robust deep learning models that can handle unforeseen samples and a variety of data distributions. Recently, Unsupervised Domain Adaptation for deep learning models (D-UDA) has addressed these limitations by transferring knowledge from a labeled source domain to an unlabeled target domain, reducing the dataset shift between domain distributions. However, despite recent advances in D-UDA, current works have not studied the specific types of distribution shift under which D-UDA methods can ensure that transfer is helpful, avoiding the risk of 'negative transfer'. In this paper, we present a study of the effect of different cases of negative transfer on the most popular and recent D-UDA methods reported in the literature. To this end, we evaluate the accuracy of D-UDA methods across scenarios containing different types of distribution shift. Experimental results show that specific cases of distribution shift induce negative transfer in the evaluated D-UDA methods. From this study, we provide insights for selecting and designing robust D-UDA methods for intelligent systems.
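
To make the D-UDA setting described above concrete, the following is a minimal sketch (not the method of this paper) of a generic deep UDA training step: a shared feature extractor is optimized with a supervised loss on labeled source data plus a distribution-alignment loss between source and target features, here an MMD estimate with a Gaussian kernel. All names, architectures, and hyperparameters below are illustrative assumptions.

```python
# Minimal, assumed sketch of a generic deep UDA update (PyTorch).
import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate between two feature batches with an RBF kernel."""
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Illustrative networks and optimizer (assumed shapes for 28x28 grayscale inputs).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3
)
ce_loss = nn.CrossEntropyLoss()
lambda_align = 0.1  # trade-off between task loss and alignment loss (assumed)

def train_step(xs, ys, xt):
    """One update: (xs, ys) is a labeled source batch, xt an unlabeled target batch."""
    optimizer.zero_grad()
    fs, ft = feature_extractor(xs), feature_extractor(xt)
    # Supervised loss on the source domain plus feature-distribution alignment.
    loss = ce_loss(classifier(fs), ys) + lambda_align * gaussian_mmd(fs, ft)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for source/target batches.
xs = torch.randn(32, 1, 28, 28)
ys = torch.randint(0, 10, (32,))
xt = torch.randn(32, 1, 28, 28)
print(train_step(xs, ys, xt))
```

Whether such an alignment term helps or hurts depends on the type of distribution shift between domains; when the shift violates the method's assumptions, the aligned solution can perform worse than source-only training, which is the negative-transfer risk studied in the paper.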
