Abstract

Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task in which a model must cope with irrelevant context and complex sentence structure when relating an aspect to its sentiment expression. Existing studies improve ABSA performance mainly by fine-tuning a pre-trained language model (PLM) and extracting auxiliary information (e.g., syntax and context). However, these approaches fail to fully exploit the PLM and ignore the rich semantic knowledge it contains. To alleviate these problems, we propose a Prompt Semantic Augmented Network (PSAN), which improves ABSA performance by fine-tuning the PLM and leveraging the semantic knowledge contained in it. Specifically, we propose semantic augmented prompt tuning (SAPT), which obtains a more task-relevant representation by constructing prompt templates for the input sentences. We also construct label templates corresponding to the prompt templates and use them as the prompt templates' labels, so that sentence information containing the true labels is introduced into the model as prior knowledge. In addition, we use self-attention and graph convolutional networks (GCNs) to obtain contextual and syntactic information, and finally aggregate these three types of information to perform the ABSA task. Experiments show that SAPT improves PLM performance over classical fine-tuning and obtains richer prompt semantic information than standard prompt tuning. Experimental results on three public benchmark datasets show that our model outperforms state-of-the-art methods.
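The abstract describes aggregating three views of a sentence: a prompt-semantic representation, a contextual representation from self-attention, and a syntactic representation from a GCN over the dependency graph. The following is a minimal PyTorch sketch of that aggregation step only; all module names, dimensions, and the mean-pooling/concatenation choices are our own assumptions for illustration, not the paper's actual PSAN architecture.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # adj: normalized adjacency (batch, seq, seq) from a dependency parse
        return torch.relu(adj @ self.linear(h))

class SemanticAggregator(nn.Module):
    """Hypothetical aggregator: fuses prompt-semantic, contextual
    (self-attention) and syntactic (GCN) views, then classifies."""
    def __init__(self, dim, num_classes=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gcn = SimpleGCNLayer(dim)
        self.classifier = nn.Linear(dim * 3, num_classes)

    def forward(self, prompt_repr, token_repr, adj):
        # Contextual view: self-attention over the token sequence.
        ctx, _ = self.attn(token_repr, token_repr, token_repr)
        # Syntactic view: GCN over the dependency graph.
        syn = self.gcn(token_repr, adj)
        # Pool each view and concatenate with the prompt representation.
        pooled = torch.cat(
            [prompt_repr.mean(1), ctx.mean(1), syn.mean(1)], dim=-1)
        return self.classifier(pooled)

model = SemanticAggregator(dim=32)
tokens = torch.randn(2, 10, 32)        # (batch, seq_len, hidden) from a PLM
prompt = torch.randn(2, 4, 32)         # (batch, prompt_len, hidden)
adj = torch.eye(10).expand(2, 10, 10)  # placeholder adjacency matrix
logits = model(prompt, tokens, adj)
print(logits.shape)  # torch.Size([2, 3]) — one logit per sentiment class
```

In practice the token and prompt representations would come from the fine-tuned PLM and the adjacency matrix from a dependency parser; this sketch only shows how the three information types could be fused for classification.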
