Abstract

Aspect category sentiment analysis (ACSA) aims to identify the aspect categories discussed in a sentence and their corresponding sentiments, regardless of whether the aspect terms are explicitly mentioned. However, current methods tend to over-augment the original data, introducing unnecessary information, and fail to sufficiently capture the relationship between the two subtasks. This paper presents a new method, the prompt-based joint model (PBJM), to address these problems. PBJM treats sentiment polarity prediction as binary classification and leverages a natural language prompt template, a concise sentence that guides the model through the aspect category identification subtask and reduces the need for data augmentation. The two subtasks are trained jointly on pre-trained language models (PLMs) to capture their correlation. Furthermore, an attention mechanism over aspect categories enables the model to focus selectively on salient features, such as words and phrases, during prediction. In addition, the verbalizer employs a set of parameters to balance the weight of each label word when projecting between the label space and the label-word space. Experiments on four datasets show that our model performs strongly in detecting category-sentiment pairs.
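As a loose illustration of the prompt-template and weighted-verbalizer idea summarized above, the sketch below scores a hypothetical aspect category with a masked PLM: the sentence is wrapped in a concise prompt, and the mask-position logits of a few label words are combined with per-word weights to project onto the label space. The model name, prompt wording, and label words are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch, assuming a BERT-style masked LM and hypothetical label words.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical label words for one aspect category ("food"); the verbalizer keeps a
# weight per label word and uses them to project label-word scores onto the labels.
LABEL_WORDS = {"relevant": ["food", "meal", "dish"], "irrelevant": ["service", "price"]}
word_ids = {k: [tokenizer.convert_tokens_to_ids(w) for w in v] for k, v in LABEL_WORDS.items()}
weights = {k: torch.nn.Parameter(torch.ones(len(v))) for k, v in LABEL_WORDS.items()}

def score_category(sentence: str) -> dict:
    # Prompt template: a concise sentence guiding the PLM to fill in a category word.
    prompt = f"{sentence} The sentence is about {tokenizer.mask_token}."
    enc = tokenizer(prompt, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_pos[0]]       # scores over the vocabulary
    scores = {}
    for label, ids in word_ids.items():
        w = torch.softmax(weights[label], dim=0)         # balance label-word weights
        scores[label] = float((w * logits[ids]).sum())   # project onto the label space
    return scores

print(score_category("The pasta was delicious but overpriced."))
```

In a joint setup, the weights would be learned together with a binary sentiment head rather than fixed as above; this sketch only shows the prompt and verbalizer projection.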
