Abstract

Top-performing algorithms are trained on massive amounts of labeled data. When labeled data are scarce, domain adaptation (DA) offers an attractive alternative by exploiting labeled data from a different but related domain. Motivated by the Fisher criterion, we present a novel discriminative regularization term on the latent subspace, which is incorporated into the latent sparse domain transfer (LSDT) model in a unified framework. The key underlying idea is to pull samples of the same class closer together while pushing samples of different classes farther apart. However, designing an efficient optimization algorithm for this objective is nontrivial. We therefore construct a convex surrogate relaxation of the optimization constraint and solve it with the alternating direction method of multipliers (ADMM). Subsequently, we generalize our model to a reproducing kernel Hilbert space (RKHS) to capture nonlinear domain shift. Empirical studies demonstrate performance improvements on the benchmark vision dataset Caltech-4DA.
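As a rough illustration of the key idea (not the paper's actual formulation), a Fisher-style discriminative term on latent representations can be sketched as the ratio of within-class scatter to between-class scatter; minimizing it makes same-class samples compact and different-class samples well separated. The function name and the trace-ratio form below are assumptions for illustration only.

```python
import numpy as np

def fisher_regularizer(Z, y):
    """Illustrative Fisher-criterion term: small when same-class latent
    codes are compact and class means are spread apart.
    Z: (n_samples, d) latent representations; y: (n_samples,) labels."""
    mu = Z.mean(axis=0)          # global mean of the latent codes
    sw = 0.0                     # within-class scatter (trace form)
    sb = 0.0                     # between-class scatter (trace form)
    for c in np.unique(y):
        Zc = Z[y == c]
        mu_c = Zc.mean(axis=0)
        sw += ((Zc - mu_c) ** 2).sum()          # spread around class mean
        sb += len(Zc) * ((mu_c - mu) ** 2).sum()  # class-mean separation
    # Minimizing this ratio encourages compact, well-separated classes.
    return sw / (sb + 1e-12)
```

A latent subspace in which classes are tight and far apart yields a small value of this term, which is the behavior the discriminative regularizer is meant to promote.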
