Abstract

Traditional machine learning methods treat each learning task in isolation, so every model must be built from scratch. Domain adaptation sidesteps this limitation by adapting knowledge gained in one domain to another. Using datasets from several domains, this research proposes a novel strategy for unsupervised visual domain adaptation. Existing transfer learning approaches attempt to reduce domain shift using the maximum mean discrepancy (MMD) metric. We propose a combined framework, termed Visual Domain Adaptation through Locality Information (DALI), that lowers the geometrical distance between domains while probabilistically addressing the shortcomings of MMD-based alignment. The proposed approach exploits the input data features to induce the transfer of knowledge from one domain to another. Extensive experiments demonstrate that DALI outperforms traditional domain adaptation algorithms as well as CNN architectures on a variety of cross-domain object, facial, and digit recognition tasks on the Office-Caltech, Office-Home, PIE, and COIL datasets. In a comprehensive analysis on the Office-Home data, the proposed model improves the mean accuracy to 69.69%, exceeding the most recent state-of-the-art techniques.
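The abstract refers to reducing domain shift via the maximum mean discrepancy metric. As an illustration only (the abstract does not give DALI's exact formulation), the sketch below computes a linear-kernel MMD between feature matrices from two domains; the function name `linear_mmd` and the synthetic data are assumptions for demonstration.

```python
import numpy as np

def linear_mmd(source, target):
    """Squared MMD with a linear kernel: the squared distance
    between the empirical feature means of the two domains."""
    mean_diff = source.mean(axis=0) - target.mean(axis=0)
    return float(mean_diff @ mean_diff)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 5))  # source-domain features
tgt = rng.normal(1.0, 1.0, size=(100, 5))  # mean-shifted target domain

# A shifted target yields a larger discrepancy than two samples
# drawn from the same source distribution.
print(linear_mmd(src, tgt))
print(linear_mmd(src[:50], src[50:]))
```

Minimizing such a discrepancy between projected source and target features is the common objective that MMD-based transfer learning methods share.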
