Abstract

Cross-domain sentiment classification is a significant task in sentiment analysis that aims to predict the opinion orientation of text documents in a target domain using a classifier learned on a source domain. Most existing domain-adaptation approaches for sentiment classification focus on sharing low-dimensional features across domains, using domain-independent and domain-specific features to mitigate the gap between domains. Earlier cross-domain sentiment classification approaches mainly operated at the document and sentence levels, and therefore could not fully account for aspect words, word positions, and long-term dependencies. To address this concern, we propose a model for cross-domain sentiment classification based on decoding-enhanced BERT with disentangled attention (DeBERTa), a pretrained language model built on the Transformer architecture. In this article, we perform sentence and aspect embedding to mine WordPiece information from text documents. The DeBERTa language model uses a disentangled attention mechanism and an enhanced mask decoder to learn expression features. The disentangled attention mechanism encodes each word as two vectors, one for its content and one for its position. To predict the masked tokens during model pretraining, an enhanced mask decoder is employed, which incorporates absolute positions in the decoding layer. Finally, experiments conducted on a benchmark dataset demonstrate the superiority of the fine-tuned DeBERTa model for cross-domain sentiment classification tasks.
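To make the disentangled attention mechanism concrete, the sketch below illustrates how DeBERTa combines the content and position vectors: attention scores are the sum of content-to-content, content-to-position, and position-to-content terms, scaled by the square root of 3d. This is a minimal single-head illustration, not the authors' implementation; the projection names (q_c, k_c, q_r, k_r, v) and the dense relative-position table are assumptions for clarity (the released DeBERTa uses multi-head attention and bucketed relative distances).

```python
# Minimal single-head sketch of DeBERTa-style disentangled attention.
# Assumptions: toy relative-position table P with one row per relative
# distance in [-(n-1), n-1]; projection names are illustrative only.
import torch
import torch.nn.functional as F

def disentangled_attention(H, P, W):
    """H: (n, d) content vectors; P: (2n-1, d) relative-position vectors,
    where P[i - j + n - 1] encodes relative distance i - j.
    W: dict of (d, d) projection matrices."""
    n, d = H.shape
    Qc, Kc = H @ W["q_c"], H @ W["k_c"]   # content projections
    Qr, Kr = P @ W["q_r"], P @ W["k_r"]   # relative-position projections

    # Content-to-content: standard attention logits.
    a_cc = Qc @ Kc.T

    # delta[i, j] = i - j + n - 1 indexes the relative distance i - j.
    idx = torch.arange(n)
    delta = idx[:, None] - idx[None, :] + n - 1

    # Content-to-position: query content attends to key positions.
    a_cp = (Qc @ Kr.T).gather(1, delta)
    # Position-to-content: key content attends to query positions
    # (uses the reversed distance j - i, hence the transpose).
    a_pc = (Kc @ Qr.T).gather(1, delta).T

    # Three summed terms, so DeBERTa scales by sqrt(3d).
    scores = (a_cc + a_cp + a_pc) / (3 * d) ** 0.5
    return F.softmax(scores, dim=-1) @ (H @ W["v"])

torch.manual_seed(0)
n, d = 4, 8
H = torch.randn(n, d)                     # content vectors for 4 tokens
P = torch.randn(2 * n - 1, d)             # relative-position embeddings
W = {k: torch.randn(d, d) / d ** 0.5 for k in ("q_c", "k_c", "q_r", "k_r", "v")}
print(disentangled_attention(H, P, W).shape)  # torch.Size([4, 8])
```

Note the asymmetry: there is no position-to-position term, since relative positions alone carry no token-specific information; this is the design choice that distinguishes DeBERTa's attention from a standard relative-position scheme.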
