Abstract

The performance of a conventional machine learning model trained on a source domain degrades sharply when it is tested on a different data distribution (target domain). Traditional approaches deal with this problem by training a new model for each new data distribution (target domain), which is computationally expensive. This paper demonstrates how to adapt to a new data distribution (target domain) by utilising the model trained on the source domain, avoiding both the cost of re-training and the need for access to the labelled source data. In particular, we introduce an Efficient Semi-supervised Cluster-then-Label Cross-domain Adaptation algorithm (SCTLCDA) to address the cross-domain adaptation classification problem, in which we utilise both labelled and unlabelled data samples in the target domain, as well as completely unlabelled data samples in the source domain. We also show that our proposed method can manage large datasets and readily extends to cross-domain adaptation problems. The effectiveness and performance of our method are confirmed by experiments on two real-world applications: cross-domain sentiment classification and Web-spam classification.
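To make the cluster-then-label idea concrete, the following is a minimal illustrative sketch, not the paper's SCTLCDA algorithm: all target-domain points (labelled and unlabelled) are clustered, each cluster is assigned the majority label of its labelled members, and that label is propagated to the cluster's unlabelled points. The k-means routine, the toy 2-D data, and the farthest-point initialisation are all assumptions made for illustration.

```python
# Illustrative cluster-then-label sketch (hypothetical, not the paper's SCTLCDA):
# 1) cluster all points with a simple k-means,
# 2) label each cluster by majority vote of its labelled members,
# 3) propagate that label to the cluster's unlabelled points.

def _sqdist(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def _init_centers(points, k):
    """Deterministic farthest-point initialisation for k-means."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(_sqdist(p, c) for c in centers))
        centers.append(far)
    return centers

def kmeans(points, k, iters=50):
    """Lloyd's algorithm; returns the cluster index of each point."""
    centers = _init_centers(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: _sqdist(p, centers[c]))
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return assign

def cluster_then_label(points, labels, k=2):
    """labels[i] is None for unlabelled points; returns completed labels."""
    assign = kmeans(points, k)
    predicted = list(labels)
    for c in range(k):
        votes = [labels[i] for i, a in enumerate(assign)
                 if a == c and labels[i] is not None]
        majority = max(set(votes), key=votes.count) if votes else None
        for i, a in enumerate(assign):
            if a == c and predicted[i] is None:
                predicted[i] = majority
    return predicted

# Two well-separated clusters, one labelled sample in each.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
labels = ["neg", None, None, "pos", None, None]
print(cluster_then_label(points, labels, k=2))
# → ['neg', 'neg', 'neg', 'pos', 'pos', 'pos']
```

In this toy setting, the two labelled samples are enough to label both clusters; the semi-supervised benefit is that the cluster structure of the unlabelled data determines which points share a label.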
