Abstract

Deep adaptation networks learn domain-invariant features across the source and target domains with label-induced losses that require ground-truth labels as inputs. Since target ground-truth labels are unavailable, existing approaches often formulate the label-induced losses using target-predicted labels, which can be expressed in different forms. Specifically, a hard label assigns a sample to one definite category, but it is overconfident when the quality of the predicted labels is poor. In contrast, a soft label assigns a probability value to each category for a sample, which can mitigate the overconfidence problem. However, a soft label introduces many small, noisy probability values. Both label forms may therefore mislead the optimization of label-induced losses and produce negative adaptation. To address this challenge, this paper maximizes the nuclear norm of the soft label matrix to filter out noisy probability values while preserving only the important ones, yielding a more reliable importance-filtered soft label. We then use the proposed label form to reformulate several prevalent label-induced losses in a probability-weighted manner, so that they can be optimized more accurately during training and thereby alleviate negative adaptation. Furthermore, we implement the proposed scheme in a deep neural network, dubbed the importance-filtered soft-label-based deep adaptation network (IFSL-DAN). Finally, we evaluate the proposed IFSL-DAN with extensive experiments on three cross-domain datasets to demonstrate its effectiveness compared with state-of-the-art domain adaptation methods.
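The abstract does not give the exact IFSL formulation, so the following is only a minimal PyTorch sketch of the generic idea it builds on: maximizing the nuclear norm of the batch soft label (softmax prediction) matrix to encourage confident, diverse predictions. The function name, batch shapes, and loss weighting below are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def nuclear_norm_loss(logits: torch.Tensor) -> torch.Tensor:
    """Encourage a high nuclear norm of the (batch x classes) prediction matrix.

    A larger nuclear norm of the softmax matrix tends to promote both
    prediction confidence and category diversity, suppressing the small,
    noisy probability values that plain soft labels carry.
    """
    probs = F.softmax(logits, dim=1)  # soft label matrix, shape (B, C)
    # Negative sign: maximizing the nuclear norm = minimizing its negative.
    return -torch.linalg.matrix_norm(probs, ord="nuc")

# Hypothetical usage: add the term to a task loss on target-domain logits.
logits = torch.randn(32, 10, requires_grad=True)  # assumed batch of 32, 10 classes
loss = nuclear_norm_loss(logits)
loss.backward()
```

In practice such a term would be weighted and combined with the source classification loss and the reformulated label-induced losses; the weighting scheme is not specified in the abstract.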
