Abstract

Domain adaptation (DA) tackles the problem where data from the training set (source domain) and the test set (target domain) have different underlying distributions. For instance, training and testing images may be acquired under different environments, viewpoints, and illumination conditions. In this paper, we focus on the more challenging unsupervised DA problem, in which the samples in the target domain are unlabeled. Dictionary learning has gained considerable popularity because images of interest can be reconstructed sparsely in an appropriately learned dictionary [1]. Building on this observation, we propose a novel domain-adaptive dictionary learning approach that generates a set of intermediate domains to bridge the gap between the source and target domains. Our approach defines two types of dictionaries: a common dictionary and a domain-specific dictionary.

The overall learning process, illustrated in Figure 1, consists of three steps:

(1) At the beginning, we learn the common dictionary $D_C$ and the domain-specific dictionaries $D_0$ and $D_t$ for the source and target domains.

(2) At the $k$-th step, we enforce the recovered feature representations of the target data in all available domains to share the same sparse codes, while adapting the most recently obtained dictionary $D_k$ to better represent the target domain. We then multiply the dictionaries of the $k$-th domain by the corresponding sparse codes to recover the feature representation $X_t^k$ of the target data in this domain.

(3) We update $D_k$ to obtain the next domain-specific dictionary $D_{k+1}$ by further minimizing the reconstruction error in representing the target data. We alternate between the sparse coding and dictionary updating steps until the stopping criterion is satisfied.

Notations: Let $X_s \in \mathbb{R}^{d \times N_s}$ and $X_t \in \mathbb{R}^{d \times N_t}$ be the feature representations of the source and target data, respectively, where $d$ is the feature dimension and $N_s$, $N_t$ are the numbers of samples in the two domains. The recovered feature representations of the source and target data in the $k$-th intermediate domain are denoted $X_s^k \in \mathbb{R}^{d \times N_s}$ and $X_t^k \in \mathbb{R}^{d \times N_t}$, respectively. The common dictionary is denoted $D_C$, while the source-specific and target-specific dictionaries are denoted $D_0$ and $D_t$, respectively. Similarly, we use $D_k$, $k = 1, \dots, N$, to denote the domain-specific dictionary of the $k$-th intermediate domain, where $N$ is the number of intermediate domains. All dictionaries are set to the same size, $\in \mathbb{R}^{d \times n}$.

At the beginning, we learn the common dictionary $D_C$ by minimizing the reconstruction error of both the source and target data.
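A minimal sketch of this objective, written as a standard sparse dictionary learning problem and assuming an $\ell_0$ sparsity constraint with level $T_0$ on each sparse code (the exact constraint used here is an assumption), is:

$$\min_{D_C,\, \alpha^s,\, \alpha^t} \; \|X_s - D_C \alpha^s\|_F^2 + \|X_t - D_C \alpha^t\|_F^2 \quad \text{s.t.} \;\; \|\alpha_i^s\|_0 \le T_0,\; \|\alpha_i^t\|_0 \le T_0 \;\; \forall i,$$

where $\alpha^s \in \mathbb{R}^{n \times N_s}$ and $\alpha^t \in \mathbb{R}^{n \times N_t}$ are the sparse codes of the source and target data over $D_C$, and $\alpha_i$ denotes the code of the $i$-th sample.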

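To make the alternating procedure concrete, the following Python sketch implements the three steps above using NumPy and scikit-learn. It is a sketch under stated assumptions, not the paper's exact algorithm: the helpers `learn_dictionary` and `adapt_dictionaries` are hypothetical names, orthogonal matching pursuit is one possible sparse coder, the least-squares update of $D_k$ is one way to reduce the target reconstruction error, and the stopping criterion (a small change in relative reconstruction error, threshold `tol`) is assumed.

```python
# A minimal sketch of the three-step process above, assuming NumPy and
# scikit-learn. Helper names, the OMP sparse coder, the least-squares
# dictionary update, and the stopping threshold are illustrative choices,
# not the paper's exact algorithm.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import orthogonal_mp


def learn_dictionary(X, n_atoms, sparsity):
    """Learn a dictionary D (d x n_atoms) whose sparse codes reconstruct
    the columns of X, via scikit-learn's DictionaryLearning."""
    model = DictionaryLearning(n_components=n_atoms,
                               transform_algorithm="omp",
                               transform_n_nonzero_coefs=sparsity)
    model.fit(X.T)              # scikit-learn expects samples as rows
    return model.components_.T  # columns are the learned atoms


def adapt_dictionaries(X_s, X_t, n_atoms=64, sparsity=5,
                       max_domains=10, tol=1e-3):
    # Step (1): common dictionary D_C from pooled source and target data,
    # and a source-specific dictionary D_0 from the source residual.
    D_C = learn_dictionary(np.hstack([X_s, X_t]), n_atoms, sparsity)
    A_s = orthogonal_mp(D_C, X_s, n_nonzero_coefs=sparsity)
    D_k = learn_dictionary(X_s - D_C @ A_s, n_atoms, sparsity)  # D_0

    recovered, prev_err = [], np.inf
    for k in range(max_domains):
        # Step (2): sparse-code the target data in the current domain
        # [D_C, D_k] and recover its representation X_t^k in this domain.
        D = np.hstack([D_C, D_k])
        A_t = orthogonal_mp(D, X_t, n_nonzero_coefs=sparsity)
        recovered.append(D @ A_t)  # X_t^k

        # Step (3): least-squares update of the domain-specific part so
        # that D_{k+1} better reconstructs the target residual.
        residual = X_t - D_C @ A_t[:n_atoms]
        D_k = residual @ np.linalg.pinv(A_t[n_atoms:])
        D_k /= np.maximum(np.linalg.norm(D_k, axis=0), 1e-12)

        # Stop when the relative reconstruction error plateaus.
        err = np.linalg.norm(X_t - D @ A_t) / np.linalg.norm(X_t)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return D_C, D_k, recovered
```

Given source features `X_s` of shape `(d, N_s)` and target features `X_t` of shape `(d, N_t)`, `adapt_dictionaries(X_s, X_t)` returns the common dictionary, the final domain-specific dictionary, and the recovered target representations $X_t^k$ for each intermediate domain.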