Abstract

Existing domain adaptation methods for textual emotion classification tend to focus on a single source domain rather than multi-source domain adaptation. The limited information and data volume available from a single source domain hamper the efficacy of emotion classification. To improve the performance of domain adaptation, in this article we present a novel multi-source domain adaptation approach for emotion classification that combines broad learning and deep learning. Specifically, we first design a model that uses BERT and Bi-LSTM to extract domain-invariant features from each source domain to the same target domain, which better captures contextual features. We then adopt broad learning to train multiple classifiers on the domain-invariant features, which handles multi-label classification tasks more effectively. In addition, we design a co-training model to boost these classifiers. Finally, we conduct experiments on four datasets and compare our approach with baseline methods. The experimental results show that our proposed approach significantly outperforms the baselines for textual emotion classification.
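
Below is a minimal sketch, not the authors' implementation, of the BERT + Bi-LSTM feature extractor the abstract describes, written with PyTorch and the Hugging Face transformers library. The model name, hidden size, and pooling strategy are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class BertBiLSTMExtractor(nn.Module):
    """Encodes a sentence with BERT, then a Bi-LSTM, producing a feature
    vector that a downstream (e.g., broad-learning) classifier could use."""

    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,  # 768 for bert-base
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT: (batch, seq_len, 768).
        token_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Bi-LSTM over the token sequence; h_n is (2, batch, hidden_size).
        _, (h_n, _) = self.bilstm(token_states)
        # Concatenate final forward and backward states: (batch, 2 * hidden).
        return torch.cat([h_n[0], h_n[1]], dim=-1)


# Example usage on a single sentence.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I am thrilled about this result!"],
                  return_tensors="pt", padding=True, truncation=True)
extractor = BertBiLSTMExtractor()
features = extractor(batch["input_ids"], batch["attention_mask"])
print(features.shape)  # torch.Size([1, 512])
```

In the approach summarized above, features like these would be extracted per source domain and fed to broad-learning classifiers; how domain invariance is enforced (e.g., an adversarial or alignment objective) is not stated in the abstract, so it is omitted here.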
