Abstract

Existing domain adaptation methods for classifying textual emotions tend to focus on a single source domain rather than multi-source domain adaptation. The limited information and volume available from a single source domain hamper the efficacy of emotion classification. To improve domain adaptation performance, we present in this article a novel multi-source domain adaptation approach for emotion classification that combines broad learning and deep learning. Specifically, we first design a model that uses BERT and Bi-LSTM to extract domain-invariant features from each source domain with respect to the same target domain, which better captures contextual features. We then adopt broad learning to train multiple classifiers on the domain-invariant features, which handles multi-label classification tasks more effectively. In addition, we design a co-training model to boost these classifiers. Finally, we conduct experiments on four datasets and compare against baseline methods. The experimental results show that our proposed approach significantly outperforms the baselines for textual emotion classification.
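
The abstract only sketches the feature-extraction stage, so the following is a minimal, hypothetical illustration (assuming PyTorch and HuggingFace Transformers) of a BERT + Bi-LSTM encoder of the kind described. All class names, parameter choices, and the pooling strategy are assumptions for illustration; the paper's exact architecture and its mechanism for enforcing domain invariance are not specified here.

```python
# Hypothetical sketch of a BERT + Bi-LSTM feature extractor; not the authors'
# exact model. Hidden sizes, pooling, and naming are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class BertBiLstmFeatureExtractor(nn.Module):
    def __init__(self, hidden_size: int = 256, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        # Bi-LSTM over BERT token embeddings to capture contextual features
        # in both directions.
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )

    def forward(self, input_ids, attention_mask):
        token_embeddings = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                             # (batch, seq_len, 768)
        _, (h_n, _) = self.bilstm(token_embeddings)
        # Concatenate the final forward and backward hidden states as a
        # sentence-level feature vector passed on to downstream classifiers.
        return torch.cat([h_n[-2], h_n[-1]], dim=-1)    # (batch, 2 * hidden_size)


if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    extractor = BertBiLstmFeatureExtractor()
    batch = tokenizer(["I am so happy today!"], return_tensors="pt", padding=True)
    features = extractor(batch["input_ids"], batch["attention_mask"])
    print(features.shape)  # torch.Size([1, 512])
```

In a multi-source setting, one such extractor (or a shared one) would produce features for each source domain and the target domain before the broad-learning classifiers and co-training step mentioned in the abstract.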
