Abstract

Sentiment analysis aims to determine the sentiment orientation of a piece of text (a sentence or document), but many practical applications require more in-depth analysis, making finer-grained sentiment classification the ideal solution. Aspect-Level Sentiment Classification (ALSC) identifies the sentiment polarity of aspect terms in a sentence. As the mainstream Transformer framework for sentiment classification, BERT-based models apply a self-attention mechanism that extracts global semantic information for a given aspect, while a certain proportion of local information is lost in the process. Although recent ALSC models achieve good performance, they suffer from robustness issues, and an uneven sample distribution further hurts performance. To address these issues, we present the PConvBERT (Prompt-ConvBERT) and PConvRoBERTa (Prompt-ConvRoBERTa) models, in which local context features learned by a Local Semantic Feature Extractor (LSFE) are fused with the global features of BERT/RoBERTa. To deal with the robustness problem common to many deep learning models, adversarial training is applied to increase model stability. Additionally, Focal Loss is applied to alleviate the impact of the imbalanced sample distribution. To fully exploit the capability of the pre-trained model itself, we also propose natural-language prompt approaches that better fit the ALSC problem: sentiment is classified from the output vector at the masked position of a prompt template. Extensive experiments on public datasets demonstrate the effectiveness of our models.
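The abstract names two generic training techniques, Focal Loss and adversarial training, without implementation detail. Below is a minimal PyTorch sketch of both, assuming a standard softmax classifier head and an FGM-style embedding perturbation (a common choice for adversarial training when fine-tuning BERT); the hyperparameters `gamma` and `epsilon` and the embedding layer name are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Focal Loss (Lin et al., 2017): scales cross-entropy by (1 - p_t)^gamma,
    down-weighting easy examples so under-represented classes matter more."""
    log_probs = F.log_softmax(logits, dim=-1)                  # (batch, classes)
    ce = F.nll_loss(log_probs, targets, weight=alpha, reduction="none")
    p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    return ((1.0 - p_t) ** gamma * ce).mean()

class FGM:
    """Fast Gradient Method: after the normal backward pass, nudge the word
    embeddings along the gradient direction, train on the perturbed input,
    then restore the original weights."""
    def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm != 0 and not torch.isnan(norm):
                    p.data.add_(self.epsilon * p.grad / norm)

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}
```

In a typical fine-tuning loop, one would call loss.backward(), then fgm.attack(), compute the loss again on the perturbed embeddings and backpropagate to accumulate adversarial gradients, and finally fgm.restore() before the optimizer step.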
