Abstract

Data-driven dictionaries have produced state-of-the-art results in various classification tasks. However, when the target data has a distribution different from that of the source data, the learned sparse representation may not be optimal. In this paper, we investigate whether it is possible to optimally represent both source and target with a common dictionary. In particular, we describe a technique that jointly learns projections of the data in the two domains and a latent dictionary that can succinctly represent both domains in the projected low-dimensional space. The algorithm is modified to learn a common discriminative dictionary, which can further improve classification performance. The algorithm is also effective for adaptation across multiple domains and is extensible to nonlinear feature spaces. The proposed approach does not require any explicit correspondences between the source and target domains and yields good results even when only a few labels are available in the target domain. We also extend it to unsupervised adaptation in cases where the same features are extracted across all domains. It can further be used for heterogeneous domain adaptation, where different features are extracted for different domains. Various recognition experiments show that the proposed method performs on a par with, or better than, competitive state-of-the-art methods.
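
At its core, the approach can be viewed as minimizing ||P_s Y_s - D X_s||_F^2 + ||P_t Y_t - D X_t||_F^2 over domain projections P_s and P_t (with orthonormal rows), a shared dictionary D, and sparse codes X_s and X_t. The Python sketch below illustrates one simplified alternating scheme under these assumptions; the names (joint_latent_dictionary, pca_basis, orthonormal_projection) are hypothetical, and the closed-form updates are simplifications rather than the paper's exact optimization.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def pca_basis(Y, dim):
        """Rows are the top-`dim` principal directions of Y (features x samples)."""
        Yc = Y - Y.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(Yc, full_matrices=False)
        return U[:, :dim].T

    def orthonormal_projection(Y, A, eps=1e-6):
        """Least-squares map P with P @ Y ~= A, snapped to orthonormal rows."""
        P = A @ Y.T @ np.linalg.inv(Y @ Y.T + eps * np.eye(Y.shape[0]))
        U, _, Vt = np.linalg.svd(P, full_matrices=False)
        return U @ Vt  # nearest matrix with orthonormal rows (Procrustes step)

    def joint_latent_dictionary(Y_s, Y_t, dim=10, n_atoms=32, sparsity=4,
                                n_iter=15, seed=0):
        """Simplified sketch, not the paper's exact updates: alternate between
        (a) sparse coding of both projected domains over one dictionary,
        (b) a dictionary update, and (c) per-domain projection updates."""
        P_s, P_t = pca_basis(Y_s, dim), pca_basis(Y_t, dim)
        rng = np.random.default_rng(seed)
        D = rng.standard_normal((dim, n_atoms))
        D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
        n_s = Y_s.shape[1]
        for _ in range(n_iter):
            # Project both domains into the shared latent space.
            Z = np.hstack([P_s @ Y_s, P_t @ Y_t])
            # Common sparse codes over the shared dictionary (OMP).
            X = orthogonal_mp(D, Z, n_nonzero_coefs=sparsity)
            # Dictionary update: regularized least squares, then renormalize.
            D = Z @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(n_atoms))
            D /= np.linalg.norm(D, axis=0) + 1e-12
            # Projection update per domain.
            P_s = orthonormal_projection(Y_s, D @ X[:, :n_s])
            P_t = orthonormal_projection(Y_t, D @ X[:, n_s:])
        return P_s, P_t, D

Here Y_s and Y_t are feature-by-sample matrices from the two domains; because both are coded over the same atoms, a classifier trained on source codes can be applied directly to the codes of projected target samples.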
