Abstract

We propose a novel approach for unsupervised visual domain adaptation that exploits auxiliary information in the target domain. The key idea is to embed target-domain data into a subspace where samples are better organized, using the auxiliary information as a semantically related signal. Specifically, we apply partial least squares (PLS) to RGB image features and the corresponding depth features captured at the same time. This improves domain adaptation performance without any manual annotation in the target domain. In experiments, we tested our approach with two state-of-the-art subspace-based domain adaptation methods and show that it consistently improves classification accuracy.
