Transfer learning is an effective means of leveraging the rich information in labeled samples from the source domain to build an appropriate classifier for annotating the target domain. Existing transfer learning algorithms usually try to minimize the distribution difference between domains using Maximum Mean Discrepancy (MMD)-based metric criteria. However, MMD-based domain adaptation methods typically suffer from two problems: 1) the local geometric information of the inter-domain data is ignored, which may cause negative transfer; 2) the classifier and the feature learning are not integrated into one framework, so a globally optimal solution may not be obtained. Therefore, a Robust Manifold Discriminative Distribution Adaptation (RMDDA) model is presented for transfer subspace learning. Because the difference between domains leads to a large deviation between the original features of the source- and target-domain data, it is difficult to construct the manifold structure of inter-domain data. A new feature, termed Histogram Feature of Neighbors (HFON), is first introduced to correctly capture the domain locality. Second, marginal distribution adaptation, conditional distribution adaptation, and discriminative regression are integrated into a single model, which resolves the problem of separating feature learning from classifier training. Considering that discriminative regression limits the dimension of the extracted features to the number of classes, we introduce double projection matrices into the model. Finally, extensive experiments on six benchmark cross-domain image datasets demonstrate the superiority of the RMDDA algorithm over several state-of-the-art algorithms.
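As background for the MMD-based criteria the abstract discusses, the following is a minimal sketch of the standard biased empirical estimate of squared MMD between source and target samples under an RBF kernel; it is generic background, not the RMDDA model itself, and the function names and the `gamma` bandwidth are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(Xs, Xt, gamma=1.0):
    # Biased empirical estimate of squared Maximum Mean Discrepancy:
    # MMD^2 = E[k(xs, xs')] + E[k(xt, xt')] - 2 E[k(xs, xt)]
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()
```

A larger value indicates a larger distribution gap between the two domains; domain adaptation methods of the kind criticized above minimize such a term over a learned projection of the data.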