Abstract

The goal of cross-domain sentiment classification is to exploit useful information from the source domain to classify sentiment polarity in the target domain, which contains a large amount of unlabelled data. Most existing methods focus on extracting domain-invariant features shared by the two domains, but they fail to make full use of the unlabelled data in the target domain. To address this problem, we present a deep transfer learning mechanism (DTLM) for fine-grained cross-domain sentiment classification. DTLM transfers sentiment knowledge across domains by incorporating BERT (Bidirectional Encoder Representations from Transformers) and KL (Kullback-Leibler) divergence. We introduce BERT as a feature encoder to map the text data of different domains into a shared feature space. We then design a domain-adaptive model that uses KL divergence to reduce the difference in feature distribution between the source and target domains. In addition, we introduce entropy minimisation and consistency regularisation to process unlabelled samples in the target domain. Extensive experiments on datasets from YelpAspect, SemEval 2014 Task 4 and Twitter demonstrate the effectiveness of the proposed method and provide a better approach to cross-domain sentiment classification.
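The abstract combines three terms: a supervised loss on labelled source data, a KL-divergence penalty aligning source and target feature distributions, and an entropy-minimisation term on unlabelled target predictions. The sketch below illustrates how such an objective might be assembled; the function names, the discrete-distribution treatment of features, and the weights `lam_kl` and `lam_ent` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions.
    eps guards against log(0); assumes p and q sum to 1."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def entropy(p, eps=1e-12):
    """Shannon entropy of a vector of predicted class probabilities.
    Low entropy means the classifier is confident on that sample."""
    p = np.asarray(p, dtype=float) + eps
    return float(-np.sum(p * np.log(p)))

def dtlm_style_loss(ce_source, src_feat_dist, tgt_feat_dist,
                    target_probs, lam_kl=1.0, lam_ent=0.1):
    """Hypothetical combined objective:
    supervised cross-entropy on source labels
    + KL alignment of source/target feature distributions
    + mean entropy of predictions on unlabelled target samples."""
    l_kl = kl_divergence(src_feat_dist, tgt_feat_dist)
    l_ent = float(np.mean([entropy(p) for p in target_probs]))
    return ce_source + lam_kl * l_kl + lam_ent * l_ent
```

Minimising the KL term pushes the encoder to produce similar feature distributions for both domains, while the entropy term sharpens predictions on unlabelled target data; a consistency-regularisation term (penalising disagreement between predictions on perturbed copies of the same target sample) would be added analogously.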
