Abstract

Domain adaptation aims to extract knowledge from auxiliary source domains to assist the learning task in a target domain. In classification problems, since the distributions of the source and target domains differ, directly using source data to build a classifier for the target domain may hamper classification performance on the target data. Fortunately, in many tasks there are features that are transferable, i.e., features on which the source and target domains share similar properties. On the other hand, it is common for the source data to contain noisy features that degrade learning performance in the target domain. This issue, however, has barely been studied in existing work. In this paper, we propose to find a feature subset that is transferable across the source and target domains, so that the domain discrepancy measured on the selected features is reduced. Moreover, we seek the most discriminative features for classification. To achieve these goals, we formulate a new sparse learning model that jointly reduces the domain discrepancy and selects informative features for classification. We develop two optimization algorithms to solve the derived learning problem. Extensive experiments on real-world data sets demonstrate the effectiveness of the proposed method.
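
As a rough illustration of the kind of objective the abstract describes (not the paper's actual formulation), the sketch below combines a source-domain classification loss, an l2,1-norm sparsity term that drives row-wise feature selection, and a simple mean-embedding (linear MMD) discrepancy between source and target features. All symbols and trade-off weights (Xs, Xt, Ys, W, lam, mu) are assumptions introduced for illustration only.

```python
import numpy as np

def l21_norm(W):
    """Sum of row-wise l2 norms; rows of W that shrink to zero deselect the corresponding feature."""
    return np.sum(np.linalg.norm(W, axis=1))

def linear_mmd(Zs, Zt):
    """Squared distance between source and target means in the projected space (a simple discrepancy proxy)."""
    return np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)

def joint_objective(W, Xs, Ys, Xt, lam=0.1, mu=1.0):
    """Classification fit + sparsity (feature selection) + domain discrepancy, as a generic example."""
    fit = np.linalg.norm(Xs @ W - Ys) ** 2           # discriminative term using source labels
    sparsity = lam * l21_norm(W)                     # encourages selecting a small, transferable feature subset
    discrepancy = mu * linear_mmd(Xs @ W, Xt @ W)    # penalizes domain mismatch on the selected features
    return fit + sparsity + discrepancy

# Toy usage with random data: ns/nt samples, d features, c classes (one-hot source labels).
rng = np.random.default_rng(0)
ns, nt, d, c = 50, 40, 20, 3
Xs, Xt = rng.normal(size=(ns, d)), rng.normal(size=(nt, d)) + 0.5
Ys = np.eye(c)[rng.integers(0, c, size=ns)]
W = rng.normal(scale=0.01, size=(d, c))
print(joint_objective(W, Xs, Ys, Xt))
```

In such a formulation, minimizing the objective over W would be handled by an optimization procedure (the paper proposes two algorithms for its own model); the sketch only evaluates the objective to make the three competing terms concrete.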
