Abstract

Recently, pre-trained language models (PLMs), especially pre-trained bidirectional encoder representations from transformers (BERT), have improved the performance of aspect-based sentiment analysis (ABSA) tasks to some extent. However, owing to the imbalance of training data across sentiment polarities, PLM-based ABSA methods still suffer from the following shortcomings: (1) on small corpora dominated by polarized emotions, performance is unbalanced across polarities; and (2) in delicate and obscure scenarios dominated by neutral emotions, the performance gains from PLMs are limited. To address these shortcomings, we take BERT as an instance of PLMs and propose a general-purpose prompt model with combined semantic refinement for ABSA. First, we use BERT without fine-tuning to automatically induce prompts for various ABSA datasets, enhancing the adaptability of the model to different application scenarios. We then leverage multi-prompt learning to propose a data augmentation method that addresses the imbalance of training data across polarities. Moreover, to further deepen the model's understanding and analysis of reviews with prompts, we propose an improved BERT semantic refinement method that combines global semantic refinement with local semantic extraction. Experiments on five public datasets show that, compared with existing methods, our macro-average F1 improvement is over 10% on polarized small datasets and over 7% on an emotionally delicate and obscure dataset.
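
To make the prompt-based setup concrete, the sketch below illustrates the general mechanism the abstract refers to: a cloze-style prompt is appended to a review, and BERT's masked-language-model head scores polarity indicator words at the [MASK] position. The template and verbalizer words here are hypothetical placeholders for illustration only, not the prompts automatically induced by the paper's method.

```python
# Minimal sketch of prompt-based ABSA with a frozen BERT MLM head (illustrative only).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical verbalizer: one indicator word per sentiment polarity.
verbalizer = {"positive": "good", "negative": "bad", "neutral": "okay"}

def predict_polarity(review: str, aspect: str) -> str:
    # Hypothetical cloze-style prompt; [MASK] is filled by BERT's MLM head.
    prompt = f"{review} The {aspect} is [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos, :]  # vocabulary scores at the mask
    # Compare the scores of the verbalizer words and return the best-scoring polarity.
    scores = {pol: logits[0, tokenizer.convert_tokens_to_ids(word)].item()
              for pol, word in verbalizer.items()}
    return max(scores, key=scores.get)

print(predict_polarity("The pasta was wonderful but the service was slow.", "pasta"))
```

In this toy setting no fine-tuning is performed; the paper's contributions (automatic prompt induction, multi-prompt data augmentation, and combined semantic refinement) build on top of this basic cloze-prompt formulation.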
